Samsung now has a way to make ultra-realistic fake videos with just one photo



Deepfake videos are getting really good, and they're getting easier to make. For innocuous purposes, like an animated avatar, that's a pretty cool development. But for more insidious use cases, such as exploiting the technology to harass someone online, it's troubling.

Researchers from the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology on Monday published a paper titled "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models," illustrating how their system can create a virtual talking head from a small number of photos. Over the past year, researchers have come up with a number of new ways to create deepfakes, in which machine learning is used to generate an ultra-realistic fake video of someone, but there was still a crucial precondition for creating one: you had to collect a bunch of images of an individual in order to generate a realistic rendering from them.

Of course, that's not impossible if you have an open-source photo-scraping tool and the person has posted enough public photos or videos online. But it remained an obstacle, and it gave potential victims a reason to pay more attention to how much exploitable data they share online. This new system does away with some of what was once a necessary and tedious step.

The researchers write in the paper that their system can create "talking head models from a handful of photographs" and "with limited training time." Normally, to make a deepfake, you have to feed a wealth of images of a person (the training data) into a deep neural network, which then generates the manipulated video. These researchers claim that their system requires as little as a single photo, and that it doesn't need nearly as much time to learn from the training data before spitting out the fake video.

The researchers note in the study that, for "perfect realism," the model can be trained on 32 images, a number still very small and easy to collect in the current era of online sharing. It's hard not to imagine that anyone could glean that many images from a quick, superficial scroll through someone's Facebook page. More importantly, it shows how fast this technology is advancing.

The study includes examples of talking head models trained on a single image, and even the still frames illustrate the range at which this system can take a real image and bring it to life: images of the Mona Lisa and Girl with a Pearl Earring are shown making a range of expressions. The videos are even more unsettling: from a single source image, the system was able to generate realistic talking head models, and for many of the examples, it isn't easy to tell that they are completely fake.

In the paper, the researchers note that this type of technology could have practical applications for telepresence, including videoconferencing and multi-player games, as well as the special effects industry. As technology companies move toward animated avatars and virtual reality, this work seems like a natural step in the progression toward even more personalized and realistic visuals. It also seems like a natural next step for the likes of movie studios that might want, for example, to create a posthumous re-creation of an actor in record time.

But it would be irresponsible to extol this technology without underlining the very real threat it poses to the victims of manipulated videos. Indeed, when deepfakes first entered public consciousness, it was only a matter of time before the technology was weaponized online against women. The sad reality is that there will always be shitty people online who will exploit this type of technology, and as it evolves, there will always be people trying to make it easier and more effective.
