Earlier this month, you may have seen a website called ThisPersonDoesNotExist.com making the rounds, which uses AI to generate startlingly realistic fake faces. Well, here's the follow-up: WhichFaceIsReal.com, which lets you test your ability to distinguish AI-generated fakes from the genuine article. Go to the site and click on the face you think belongs to a real person!
WhichFaceIsReal.com also has a higher goal. It was created by two University of Washington academics, Jevin West and Carl Bergstrom, both of whom study how information spreads through society. They believe the proliferation of AI-generated fakes could be a source of trouble, undermining society's trust in evidence, and they want to educate the public.
"When a new technology like this comes up, the most dangerous time is when the technology exists but the public is not aware of it," Bergstrom said. The edge. "This is the time when it can be used most efficiently."
"So we try to educate the public, to make them aware that this technology already exists," says West. "Just like afterwards, most people have realized that you can create an image with Photoshop."
Both sites use a machine learning method called a generative adversarial network (or GAN) to create their fakes. These networks work by churning through huge piles of data (in this case, lots of portraits of real people), learning the patterns within them, and then trying to replicate what they've seen.
The reason GANs are so good is that they test themselves. One part of the network generates faces, while another part compares them against the training data. If it can tell the difference, the generator is sent back to the drawing board to improve its work. Think of it as a strict art teacher who won't let you leave class until you've sketched the right number of eyes on your charcoal portrait. There's no room for AI Picassos here, only realists.
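To make that generator-versus-discriminator loop concrete, here is a minimal sketch in Python using PyTorch on toy one-dimensional data rather than face images. Every name, network size, and hyperparameter here is an illustrative assumption; this is not the code behind either website, just the general adversarial training pattern the article describes.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into fake "samples".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Training data": real samples from a fixed distribution,
    # standing in for the pile of real portraits.
    real = torch.randn(64, 1) * 0.5 + 3.0

    # Train the discriminator to tell real from fake.
    noise = torch.randn(64, 8)
    fake = G(noise).detach()  # don't update G on this pass
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator: it is "sent back
    # to the drawing board" whenever the discriminator spots its fakes.
    noise = torch.randn(64, 8)
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

Over many iterations, the generator's output distribution drifts toward the real one, which is the same pressure that pushes face-generating GANs toward photorealism.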
These techniques can be used to manipulate audio and video as well as still images. Although there are limits to what such systems can do (you can't type a caption describing an image you want and have it magically appear), they are improving steadily. Deepfakes can turn videos of politicians into puppets and can even make you a great dancer.
Bergstrom and West note that one potential malicious use is spreading misinformation after a terrorist attack. AI could, for example, be used to generate a fake image of a culprit that is then circulated online and spread across social networks.
In scenarios like these, reporters usually try to verify the source of an image with tools such as Google's reverse image search. But that wouldn't work with an AI-generated fake. "If you want to inject misinformation into a situation like this, if you post a picture of the perpetrator and it's someone else, it will be corrected very quickly," says Bergstrom. "But if you use a picture of someone who doesn't exist at all? Think of the difficulty of tracking that down."
The pair note that academics and researchers are developing plenty of tools for detecting deepfakes. "My understanding is that right now it's actually pretty easy to do," notes West. And if you took the test above, you probably found you could tell the difference between AI-generated faces and real people yourself. There are a number of tells, including asymmetrical faces, misaligned teeth, unrealistic hair, and ears that don't quite look like ears.
But these fakes will only get better. "In three years, [these fakes] will be indistinguishable," says West. And when that happens, knowing about the technology will be half the battle. "Our message is not that people shouldn't believe anything," Bergstrom says. "Our message is the opposite: it's don't be credulous."