New deepfake-spotting tool proves 94% effective – Here’s the secret to its success




[Image: Spot the deepfake. Question: Which of these people are fake? Answer: All of them. Credit: www.thispersondoesnotexist.com and the University at Buffalo]

Computer scientists at the University at Buffalo have developed a tool that automatically identifies fake photos by analyzing the light reflections in the eyes.

The tool proved 94% effective with portrait-like photos in experiments described in a paper accepted to the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), to be held in June in Toronto, Canada.

“The cornea almost looks like a perfect hemisphere and is very reflective,” says lead author Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. “So anything that comes to the eye with light from these sources will have an image on the cornea.

“Both eyes should have very similar reflective patterns because they are seeing the same thing. This is something we usually don’t notice when we look at a face,” said Lyu, an expert in multimedia and digital forensics who has testified before Congress.

The paper, “Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights,” is available on the open-access repository arXiv.

The co-authors are Shu Hu, a third-year computer science PhD student and research assistant in UB’s Media Forensic Lab, and Yuezun Li, PhD, a former research scientist at UB who is now a lecturer at the Ocean University of China’s Center on Artificial Intelligence.

Tool maps face, examines tiny differences in eyes

When we look at something, an image of what we see is reflected in our eyes. In a real photo or video, the highlights in the two eyes usually have the same shape and color.

However, most images generated by artificial intelligence – including generative adversarial network (GAN) images – fail to reproduce this accurately or consistently, possibly because many photos are combined to generate the fake image.

Lyu’s tool exploits this shortcoming by spotting tiny deviations in the light reflected in the eyes of deepfake images.

To conduct the experiments, the research team obtained real images from Flickr-Faces-HQ and fake images from www.thispersondoesnotexist.com, a repository of AI-generated faces that look realistic but are in fact fake. All of the images were portrait-like (real and fake people looking directly into the camera in good lighting) and measured 1,024 by 1,024 pixels.

The tool works by first mapping each face. It then examines the eyes, then the eyeballs, and finally the light reflected in each eyeball. It compares, in minute detail, potential differences in the shape, intensity, and other characteristics of the reflected light.
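To make the pipeline concrete, here is a minimal sketch of the comparison idea in Python. It is not the UB team’s code: the paper segments the corneal regions from facial landmarks, while this stand-in finds eyes with OpenCV’s bundled Haar cascade, treats the brightest pixels of each eye crop as candidate specular highlights, and scores their agreement with intersection-over-union (IoU). The input file name and all thresholds are illustrative.

import cv2
import numpy as np

def highlight_mask(eye_bgr, percentile=99):
    # The brightest pixels of an eye crop serve as candidate specular highlights.
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    return (gray >= np.percentile(gray, percentile)).astype(np.uint8)

def highlight_iou(eye_a, eye_b, size=(32, 32)):
    # Resize both crops to a common grid and compare highlight masks with IoU.
    m_a = highlight_mask(cv2.resize(eye_a, size))
    m_b = highlight_mask(cv2.resize(eye_b, size))
    union = np.logical_or(m_a, m_b).sum()
    return np.logical_and(m_a, m_b).sum() / union if union else 0.0

def eye_crops(image_bgr):
    # Detect the two largest eye regions with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = sorted(cascade.detectMultiScale(gray, 1.1, 5),
                   key=lambda b: b[2] * b[3], reverse=True)[:2]
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]

img = cv2.imread("portrait.png")  # hypothetical input file
if img is not None and len(eyes := eye_crops(img)) == 2:
    # Real photos tend to score high (matching highlights); a low score is suspicious.
    print(f"highlight similarity (IoU): {highlight_iou(*eyes):.2f}")

In practice, a per-image score like this would be thresholded, with low similarity flagging a likely GAN-generated face.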

“Deepfake-o-meter” and a commitment to fighting deepfakes

Although promising, Lyu’s technique has limitations.

For one thing, there must be a source of light reflected in the eyes. Mismatched light reflections can also be corrected during image editing. Additionally, the technique looks only at the individual pixels of light reflected in the eyes – not the shape of the eyes, the shapes within the eyes, or the nature of what is reflected in them.

Finally, the technique compares the highlights in both eyes. If the subject is missing an eye or the eye is not visible, the technique fails.

Lyu, who has researched machine learning and computer vision projects for more than 20 years, previously showed that deepfake videos tend to have inconsistent or nonexistent blink rates for their subjects.
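That earlier blink-based check can be illustrated in the same hedged spirit. The sketch below uses the standard eye aspect ratio (EAR) heuristic rather than Lyu’s published detector; it assumes the six eye landmarks come from some facial-landmark model (dlib, MediaPipe, etc.), and the thresholds and the synthetic trace are illustrative.

import numpy as np

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # EAR from six eye landmarks: falls toward 0 as the eye closes.
    pts = [np.asarray(p, float) for p in (p1, p2, p3, p4, p5, p6)]
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    return vertical / (2.0 * np.linalg.norm(pts[0] - pts[3]))

def count_blinks(ear_series, closed=0.2, min_frames=2):
    # A blink is a short run of consecutive frames with EAR below the threshold.
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# Synthetic 10-second trace at 30 fps: eyes open (~0.3) with two brief blinks.
ear_trace = np.full(300, 0.3)
ear_trace[90:94] = 0.05
ear_trace[210:214] = 0.05
print("blinks detected:", count_blinks(ear_trace))  # humans blink roughly 15-20 times/min

On a real clip, too few blinks over a long enough window would be the red flag.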

In addition to testifying before Congress, he assisted Facebook with its global deepfake detection challenge in 2020, and he helped create the “Deepfake-o-meter,” an online resource that helps the average person test whether a video they have watched is, in fact, a deepfake.

He says identifying deepfakes is increasingly important, especially given a hyper-partisan world full of racial and gender tensions and the dangers of misinformation – particularly the violence it can provoke.

“Unfortunately, a lot of these kinds of fake videos were created for pornographic purposes, and that (caused) a lot of… psychological damage to the victims,” Lyu says. “There is also the potential political impact – fake videos showing politicians saying something or doing something that they are not supposed to do. That’s bad.”


