AI-based framework creates realistic textures in the virtual world



Examples of "texture synthesis" using a unique artificial-intelligence-based technique that trains a network to expand small textures. This data-driven method takes advantage of an AI technique called generative adversarial networks (GANs) to train computers to extend the textures of a sample into larger instances that best resemble the original sample. Credit: Zhen Zhu, Xiang Bai, Daniel Lischinski, Daniel Cohen-Or and Hui Huang

Many designers of virtual worlds find it difficult to create complex, credible textures and patterns at large scale. Indeed, what is called "texture synthesis", the design of precise textures such as the ripples of water in a river, concrete walls, or leaf patterns, remains a difficult task for artists. A plethora of non-stationary textures in the real world could be recreated in games or virtual worlds, but existing techniques are tedious and time-consuming.

To meet this challenge, a global team of computer scientists has developed a unique artificial-intelligence-based technique that trains a network to learn how to expand smaller textures into larger ones. The researchers' data-driven method takes advantage of an AI technique called generative adversarial networks (GANs) to train computers to extend the textures of a sample into larger instances that better resemble the original sample. "The network learns to expand textures without any high-level or semantic description of the large-scale structure," says Yang Zhou, senior author of the work and assistant professor at Shenzhen University and Huazhong University of Science & Technology. "It can cope with very difficult textures, which, to our knowledge, no other existing method can handle. The results are realistic textures produced in high resolution, efficiently, and on a much larger scale."

Example-based texture synthesis aims to generate a texture, usually larger than the input, that closely captures the visual characteristics of the sample input, without being identical to it, while retaining a realistic appearance. Examples of non-stationary textures include textures with large-scale irregular structures, or textures that exhibit spatial variation in attributes such as color, local orientation, and local scale. The researchers tested their method on intricate examples such as peacock feathers and tree-trunk ripples, whose repetitive patterns seem endless.

Zhou and his collaborators, including Zhen Zhu and Xiang Bai (Huazhong University of Science & Technology), Dani Lischinski (Hebrew University of Jerusalem), Daniel Cohen-Or (Shenzhen University and Tel Aviv University), and Hui Huang (Shenzhen University), will present their work at SIGGRAPH 2018, August 12-16 in Vancouver, British Columbia. This annual gathering showcases the world's leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Their method trains a generative network, called the generator, to learn how to expand (double the spatial extent of) an arbitrary texture block cropped from an example, so that the expanded result is visually similar to a containing example block of the appropriate size (twice as large). The visual similarity between the automatically expanded block and the real containing block is evaluated by a discriminator network. As is typical of GANs, the discriminator is trained in parallel with the generator to distinguish between real large blocks from the example and those produced by the generator.
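The self-supervised pair construction described above can be sketched as follows. This is a minimal illustration only, assuming the small source block sits centered inside the larger target block; the paper's exact cropping scheme and the network architectures are not reproduced here.

```python
import numpy as np

def sample_training_pair(texture, k, rng):
    """Crop a random 2k x 2k 'target' block from the example texture,
    and take its centered k x k sub-block as the generator's input.
    The generator would be trained to expand the small block to 2k x 2k
    so that the discriminator cannot tell its output from the real
    target block. (Illustrative sketch of the data construction only.)"""
    h, w = texture.shape[:2]
    y = rng.integers(0, h - 2 * k + 1)
    x = rng.integers(0, w - 2 * k + 1)
    target = texture[y:y + 2 * k, x:x + 2 * k]               # real large block
    half = k // 2
    source = target[half:half + k, half:half + k]            # centered small block
    return source, target

rng = np.random.default_rng(0)
texture = rng.random((256, 256, 3))   # stand-in for an example texture image
src, tgt = sample_training_pair(texture, k=64, rng=rng)
print(src.shape, tgt.shape)           # (64, 64, 3) (128, 128, 3)
```

Because every (source, target) pair comes from the example texture itself, no manual labels are needed, which is what makes the adversarial training self-supervised.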

Zhou said, "Surprisingly, we found that, using such a conceptually simple, self-supervised adversarial training strategy, the trained network works almost perfectly on a wide range of textures, including both stationary and non-stationary textures."

The tool is intended to help texture artists in the design of video games, virtual reality, and animation. Once self-supervised adversarial training has been performed for a given texture sample, the framework can automatically generate expanded textures up to twice the size of the original sample. In the future, the researchers hope their system will be able to extract high-level information from textures in an unsupervised way.

In addition, in future work, the team intends to train a "universal" model on a large-scale texture dataset, as well as to increase user control. For texture artists, controlled synthesis with user interaction will likely be even more useful, since artists tend to manipulate textures for their own designs.

For the paper and video, visit the team's project page.



Source:
Association for Computing Machinery
