Do these fake people created by AI seem real to you?

There are now companies that sell fake people. On the website Generated.Photos, you can buy a "unique, worry-free" fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people, for characters in a video game, or to make your company website appear more diverse, you can get their photos for free at ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or whatever ethnicity you choose. If you want your fake person animated, a company called Rosebud.AI can do that, and can even make them talk.

These simulated people are starting to show up around the internet, used as masks by real people with nefarious intent: spies who don an attractive face in an effort to infiltrate the intelligence community; right-wing propagandists who hide behind fake profiles, photo and all; online harassers who troll their targets with a friendly visage.

We created our own AI system to understand how easy it is to generate different fake faces.

The AI system sees each face as a complex mathematical figure, a range of values that can be shifted. Choosing different values, like those that determine the size and shape of the eyes, can alter the whole image.

For other qualities, our system used a different approach. Instead of shifting the values that determine specific parts of the image, the system first generated two images to establish starting and ending points for all of the values, and then created images in between.
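To make the idea concrete, here is a minimal sketch, in Python with NumPy, of the two techniques described above. The generate_image function, the latent-vector size, and the index used for the "eye" value are all hypothetical stand-ins, not The Times's actual system; a real pretrained generator maps a latent vector to a photorealistic face.

```python
# A minimal sketch of the two approaches described above. `generate_image`,
# LATENT_DIM, and the index nudged for the "eye" value are hypothetical
# stand-ins for a real pretrained GAN generator.
import numpy as np

rng = np.random.default_rng(seed=0)
LATENT_DIM = 512  # a common latent-vector size; an assumption here

def generate_image(z: np.ndarray) -> np.ndarray:
    """Stand-in for a GAN generator: latent vector in, image out.
    A real generator would return a photorealistic face; this one just
    returns a deterministic 64x64 RGB array derived from z."""
    pix = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return pix.random((64, 64, 3))

# Approach 1: shift individual values in the latent vector.
z = rng.standard_normal(LATENT_DIM)
z_tweaked = z.copy()
z_tweaked[42] += 2.0  # nudging one value might change, say, eye size and shape
face = generate_image(z_tweaked)

# Approach 2: generate two endpoint latents, then create images in between
# by shifting every value at once along the line connecting them.
z_start = rng.standard_normal(LATENT_DIM)
z_end = rng.standard_normal(LATENT_DIM)
in_betweens = [generate_image((1 - t) * z_start + t * z_end)
               for t in np.linspace(0.0, 1.0, num=8)]
```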

Creating these kinds of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network, or GAN. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.
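As a rough illustration of that back-and-forth, here is a toy adversarial training loop in Python with PyTorch. Everything in it is a placeholder (random vectors stand in for photos of real people, and the networks are tiny), but the two-player structure is the one described above: a generator tries to fool a discriminator, while the discriminator tries to spot the fakes.

```python
# A toy GAN training loop, assuming PyTorch. All sizes and "photos" are
# placeholders; real face generators are vastly larger, but the adversarial
# structure is the same.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # toy sizes, chosen for illustration

generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(),
                              nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_photos = torch.randn(256, DATA_DIM)  # stand-in for real training photos

for step in range(100):
    real = real_photos[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, LATENT_DIM))

    # Discriminator: learn to label real photos 1 and generated ones 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```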

Given the pace of improvement, it's easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people, but whole collections of them: at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer's imagination.

"When the technology first appeared in 2014, it was bad – it looked like the Sims," said Camille François, a disinformation researcher whose job is to analyze the manipulation of social networks. "It's a reminder of how quickly the technology can evolve. Detection will only get harder over time."

Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features. You can use your face to unlock your smartphone, or tell your photo software to sort through your thousands of pictures and show you only those of your child. Facial recognition programs are used by law enforcement to identify and arrest criminal suspects (and also by some activists to reveal the identities of police officers who cover their name tags in an attempt to remain anonymous). A company called Clearview AI has scraped the web of billions of public photos, casually shared online by everyday users, to create an app capable of recognizing a stranger from just one photo. The technology promises superpowers: the ability to organize and process the world in a way that wasn't possible before.

But facial recognition algorithms, like other AI systems, are not perfect. Thanks to underlying bias in the data used to train them, some of these systems are not as good, for instance, at recognizing people of color. In 2015, an early image detection system developed by Google labeled two Black people as "gorillas," most likely because the system had been fed many more photos of gorillas than of people with dark skin.

Moreover, cameras, the eyes of facial recognition systems, are not as good at capturing people with dark skin; that unfortunate standard dates back to the early days of film development, when photos were calibrated to best show the faces of light-skinned people. The consequences can be severe. In January, a Black man in Detroit named Robert Williams was arrested for a crime he did not commit because of an incorrect facial recognition match.

Artificial intelligence can make our lives easier, but ultimately it is as flawed as we are, because we are behind all of it. Humans choose how AI systems are made and what data they are exposed to. We choose the voices that teach virtual assistants to hear, leading these systems not to understand people with accents. We design a computer program to predict a person's criminal behavior by feeding it data about past rulings made by human judges, and in the process we bake in those judges' biases. We label the images that train computers to see; they then associate glasses with "dweebs" or "nerds."

You can spot some of the mistakes and patterns we found that our AI system repeated when it was conjuring fake faces.


Humans err, of course: we overlook or gloss over the flaws in these systems, too quick to trust that computers are hyper-rational, objective, always right. Studies have shown that, in situations where humans and computers must cooperate to make a decision (to identify fingerprints or human faces, say), people consistently made the wrong identification when a computer nudged them to do so. In the early days of dashboard GPS systems, drivers famously followed the devices' directions to a fault, sending cars into lakes, off cliffs and into trees.

Is it humility or hubris? Do we place too little value on human intelligence, or do we overrate it, assuming we are so smart that we can create things smarter still?

The algorithms of Google and Bing sort the world's knowledge for us. Facebook's news feed filters the updates from our social circles and decides which are important enough to show us. With self-driving features in cars, we are putting our safety in the hands (and eyes) of software. We place a lot of trust in these systems, but they can be as fallible as we are.

More articles about artificial intelligence:

Training Facial Recognition on Some New Furry Friends: Bears

Antibodies Good. Machine-Made Molecules Better?

These Algorithms Could Bring an End to the World's Deadliest Killer
