Why is Facebook teaching an insect-like robot to walk?


Daisy, a robot that looks like a giant bug, is part of a robotics research project within Facebook AI Research (FAIR). Since last summer, FAIR scientists have been helping robots learn to walk and grasp objects. The goal is for them to learn these skills the way people do: by exploring the world around them and by trial and error.

Many people may not know that the world's largest social network is also tinkering with robots. But this group's work is not meant to show up in, say, your Facebook news feed. Rather, the hope is that the project will help artificial intelligence researchers advance AI that can learn in a more autonomous way. They also want robots to learn from less data than humans typically need to collect before an AI can complete tasks.

In theory, this work could eventually help improve the kinds of AI tasks that many technology companies, including Facebook, are working on, such as translating words from one language to another or recognizing people and objects in pictures.

In addition to Daisy, Facebook researchers are working with robots made up of multi-jointed arms, as well as robotic hands equipped with touch sensors on their fingertips. They use a machine learning technique called self-supervised learning, in which the robots must figure out how to do certain things – like picking up a rubber duck – by repeatedly attempting the task and then using sensor data (such as readings from the touch sensors on a robot's fingers) to get better and better.

The research is still in its infancy: Meier said the robots were just starting to reach for objects, but had not yet figured out how to grasp them. Like babies, who must first learn to use their muscles and limbs before they can crawl – let alone stand up – robots must go through this same process of discovery.

Why make a robot figure out these kinds of tasks on its own?

The robot needs to understand the consequences of its actions, FAIR researcher Franziska Meier told CNN Business.

"As human beings, we can learn that, but we need to be able to teach a robot to learn that," she said.

In addition, the researchers were surprised to find that letting the robots explore and discover things on their own could accelerate the learning process.

Some of Facebook's robots are learning to move and grab objects.

Daisy was running in demo mode when I saw her on a cloudy day last week, but she is learning to walk through self-supervised learning. The six-legged robot, which the researchers purchased, was chosen for its stability, said researcher Roberto Calandra. She started out knowing nothing about the floors she was supposed to walk on (which included smooth hallways inside Facebook as well as other surfaces). Over time, she learns to make progress, accounting for things like balance and positioning using sensors on her legs.

The researchers also gave another robot, an articulated arm with a gripper, the coordinates of a point in space they wanted it to reach; it then took five hours to get there, trying different movements each time, increasingly informed by what it had tried before.

"Every time, basically, it tries something, it gets more data, it optimizes the model, but we also explore," Meier said.
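The loop Meier describes – try something, collect the resulting data, update a model of the world, and keep exploring – can be sketched in a few lines. The toy Python example below is purely illustrative: every name and the one-parameter "physics" are invented for the sketch, and it bears no relation to Facebook's actual robot software.

```python
import random

# Toy world: a "robot arm" whose position is TRUE_GAIN * command.
# The learner does not know TRUE_GAIN and must estimate it from trials.
TRUE_GAIN = 2.5
TARGET = 10.0

def run_trial(command):
    """Execute an action in the world and return the observed outcome."""
    return TRUE_GAIN * command

def learn_to_reach(n_trials=20, explore_scale=0.5):
    gain_estimate = 1.0  # initial (wrong) model of the world
    data = []
    for _ in range(n_trials):
        # 1. Try something: act on the current model, plus exploration noise.
        command = TARGET / gain_estimate + random.uniform(-explore_scale, explore_scale)
        # 2. Get more data: observe what actually happened.
        outcome = run_trial(command)
        data.append((command, outcome))
        # 3. Optimize the model: least-squares fit of the gain over all trials.
        gain_estimate = sum(c * o for c, o in data) / sum(c * c for c, _ in data)
    return gain_estimate

estimate = learn_to_reach()
```

Because this toy world is noiseless, the fit recovers the true gain almost immediately; a real robot faces noisy sensors and far more parameters, which is why exploration and repeated trials matter.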


Calandra said one reason to work on this type of artificial intelligence with physical robots, rather than with AI software running on a computer, is that it forces the algorithms to use data efficiently. That is, they must figure out how to perform tasks in hours or days, since they operate in real time, rather than in software simulations that can be sped up to mimic a much longer period, such as months or years.

"If you say, 'Oh, I can just do more simulations, I can do 400 years of simulations' – it's an approach that, yes, would be very interesting scientifically, but it does not apply to the real world," he said.
