We hear a lot about artificial intelligence and the networks it can weave to spread false information. Now make room for more news, this time about NVIDIA's tour de force in producing fake images. Wait, the pictures of a man and a woman here both look totally authentic, yet they are computer generated.
What's going on? An NVIDIA team has shown that it can mimic the appearance of real photos, better than you might imagine, with a new generator. As Paul Lilly put it in Hot Hardware: not only should you not believe everything you read, you now cannot believe everything you see, either.
Their method does not require human supervision. Inside the "brain" of their approach, the generator treats an image not as a single picture but as a collection of styles at three scales: coarse, middle and fine.
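The styles idea can be sketched in a few lines of Python. To be clear, this is a toy illustration, not NVIDIA's code: `mapping_network`, `adain`, the weight scales and the three feature-map resolutions are all stand-ins chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, layers):
    """Toy stand-in for StyleGAN's mapping network: a stack of fully
    connected layers turning a latent code z into a style vector w."""
    w = z
    for W in layers:
        w = np.tanh(W @ w)  # squashing nonlinearity, illustrative only
    return w

def adain(features, scale, bias):
    """Adaptive instance normalization: a style rescales and shifts
    normalized feature maps, which is how StyleGAN injects a style
    at each resolution of the generator."""
    mu, sigma = features.mean(), features.std() + 1e-8
    return scale * (features - mu) / sigma + bias

z = rng.standard_normal(512)  # latent code
layers = [rng.standard_normal((512, 512)) * 0.05 for _ in range(3)]
w = mapping_network(z, layers)

# Different slices of w stand in for the per-layer styles: swapping the
# coarse-level styles between two faces would change pose and face shape,
# while swapping only the fine-level ones would change colour and texture.
coarse = adain(rng.standard_normal((4, 4)), w[0], w[1])    # 4x4: pose, layout
middle = adain(rng.standard_normal((16, 16)), w[2], w[3])  # 16x16: features
fine = adain(rng.standard_normal((64, 64)), w[4], w[5])    # 64x64: texture
print(coarse.shape, middle.shape, fine.shape)
```

The point of the sketch is the separation: each resolution of the generator receives its own style inputs, which is what lets the method disentangle coarse attributes from fine ones.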
In short, it is now easier than ever to generate credible fake images. Tech observers are pointing to thispersondoesnotexist.com, which uses code previously published on GitHub by NVIDIA researchers. The site instantly generates new facial images.
Every time you load the page, an algorithm generates a new human face from scratch. "The website was created by Phillip Wang," reported SlashGear, "who used StyleGAN, NVIDIA's generative adversarial network, to create it. It's a pretty simple website when it comes to design, because it only shows a single image, a human face, when you visit it."
Pretty simple, indeed. Go to thispersondoesnotexist.com and you see, say, a woman's face. Click refresh and, bingo, an entirely different face appears: adult male, adult female, young girl, teenager, and so on. That's all. No text. No ads. So what is all this? And more importantly, why are observers talking about it?
Looking at thispersondoesnotexist.com, Lilly explained what to expect when you click through: the site generates "a new facial image from a 512-dimensional vector each time you press your browser's refresh button."
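What Lilly describes amounts to very little code on the sampling side. A minimal sketch, assuming only that each page load draws a fresh latent code (the trained generator that turns the code into a face is omitted):

```python
import numpy as np

LATENT_DIM = 512  # dimensionality quoted for StyleGAN's latent space

def new_latent(rng):
    """One browser refresh: draw a fresh latent code, which the
    (omitted) trained generator would map to a face image."""
    return rng.standard_normal(LATENT_DIM)

rng = np.random.default_rng()
z1, z2 = new_latent(rng), new_latent(rng)
print(z1.shape)                    # (512,)
print(bool(np.allclose(z1, z2)))   # False: two refreshes, two faces
```

Because the codes are drawn from a continuous 512-dimensional distribution, two refreshes essentially never repeat, which is why every visit shows a new face.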
So, what is this generative adversarial network (GAN) dubbed StyleGAN that SlashGear mentioned?
Rani Horev of LyrnAI had a useful explanation in the context of images: "Their goal is to synthesize artificial samples, such as images, that are indistinguishable from authentic images. A common example of a GAN application is generating artificial face images from a dataset of celebrity faces."
All paths lead to a paper on arXiv by NVIDIA researchers Tero Karras, Samuli Laine and Timo Aila, titled "A Style-Based Generator Architecture for Generative Adversarial Networks." They discuss a "new architecture" for GANs, one that leads to "an automatically learned, unsupervised separation of high-level attributes."
NVIDIA researchers released StyleGAN at github.com/NVlabs/stylegan, according to a Facebook post earlier this month.
Jackson Ryan of CNET said the neural network is "versatile enough to create not just faces, but rooms, cars and even cats."
Synced also talked about this versatility: "The researchers achieved impressive results using the new generator to create bedroom, car and cat images with the LSUN (Large-scale Scene Understanding) dataset."
Jesus Diaz in Fast Company, using a cat example, provides a useful snapshot of StyleGAN as a generative adversarial network: "It is composed of two algorithms: the first generates cats based on what it has learned from thousands of cat images, while the second evaluates the computer-generated images and compares them to real photos. It gives the first algorithm feedback on its work, until the first finally manages to create consistently credible portraits."
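Diaz is describing the standard GAN training loop. The sketch below shows that two-player loop on one-dimensional toy data, with scalars drawn from a Gaussian standing in for cat photos; the affine generator, logistic discriminator, learning rate and step count are illustrative assumptions, not NVIDIA's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy "real data": scalars from N(3, 0.5) stand in for real photos.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.03, 64

for _ in range(3000):
    # Discriminator step: score real samples high, generated ones low.
    xr = real_batch(batch)
    xf = a * rng.standard_normal(batch) + b
    sr, sf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - sr) * xr - sf * xf)  # ascend log D(xr) + log(1 - D(xf))
    c += lr * np.mean((1 - sr) - sf)
    # Generator step: adjust a, b to raise the discriminator's score.
    z = rng.standard_normal(batch)
    sf = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - sf) * w * z)         # ascend log D(g(z))
    b += lr * np.mean((1 - sf) * w)

fake_mean = float(np.mean(a * rng.standard_normal(10_000) + b))
print(round(fake_mean, 2))  # typically drifts from 0 toward the real mean of 3
```

Each iteration alternates the two roles Diaz names: the second network is nudged to tell real from generated samples, then the first is nudged to fool it, and the feedback between them is what pushes the generated distribution toward the real one.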
Diaz noted that the paper's authors said a combination of technologies was used to "eliminate noise unimportant to the new synthetic face, for example, picking out a bow on the head of a person or a cat and discarding it as superfluous."
Jessica Miley in Interesting Engineering wrote: "It is hoped that these GANs can be used to develop complete virtual worlds using automated methods instead of hard coding."
Nvidia's face-making approach is truly GAN-tastic
© 2019 Science X Network
Very realistic person: researchers take fake images to another level (February 17, 2019)
retrieved February 17, 2019
This document is subject to copyright. Apart from any fair use for study or private research purposes, no
part may be reproduced without written permission. Content is provided for information only.