A robotic hand? Four autonomous fingers and a thumb capable of doing everything your flesh-and-blood hand can do? Not yet. But in the world's largest artificial intelligence labs, researchers are getting ever closer to creating robotic hands that can mimic the real thing.
The Spinner
Inside OpenAI, the San Francisco artificial intelligence lab founded by Elon Musk and several other big names in Silicon Valley, you will find a robotic hand called Dactyl. It looks much like the mechanical prosthesis worn by Luke Skywalker in the latest Star Wars movie: mechanical digits that bend and straighten like a human hand.
If you give Dactyl an alphabet block and ask it to show you particular letters (say, the red O, the orange P and the blue I), it will nimbly twist, spin and flip the toy until the right letters face you.
For a human hand, that is a simple task. But for an autonomous machine, it is a notable achievement: Dactyl learned the task largely on its own. Using mathematical methods that allow Dactyl to learn, researchers believe they can train robotic hands and other machines to perform far more complex tasks.
This remarkably agile hand represents a huge leap in robotics research in recent years. Until recently, researchers were still struggling to master much simpler tasks with much simpler hands.
The Gripper
Created by researchers at Autolab, a robotics laboratory at the University of California, Berkeley, this system shows the limits of the technology as it stood only a few years ago.
Equipped with a two-fingered "gripper", the machine can pick up items like a screwdriver or a pair of pliers and sort them into bins.
A gripper is much easier to control than a five-fingered hand, and building the software needed to operate it is far less difficult.
It can handle objects it has never seen before. It may not know what a restaurant-style ketchup bottle is, but the bottle has the same basic shape as a screwdriver, something the machine does know.
But when this machine is confronted with something unlike anything it has seen before, such as a plastic bracelet, all bets are off.
The Picker
What you really want is a robot that can pick up anything, even things that it has never seen before. This is what other Autolab researchers have built in recent years.
This system still uses simple hardware: a gripper and a suction cup. But it can pick up all kinds of random objects, from a pair of scissors to a plastic toy dinosaur.
The system draws on spectacular advances in machine learning. The Berkeley researchers modeled the physics of more than 10,000 objects, identifying the best way to grasp each one. Then, using an algorithm called a neural network, the system analyzed all that data and learned to recognize the best way to pick up almost any object. In the past, researchers had to program a robot to perform each task. Now it can learn these tasks on its own.
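The pipeline described above, training a model on labeled grasps and then using it to score new candidates, can be sketched in miniature. Everything here is illustrative: the two features, the toy labeling rule and the simple logistic-regression scorer are stand-ins for Berkeley's far richer models, not the actual system.

```python
import math
import random

random.seed(0)

# Hypothetical training data: each candidate grasp is described by two toy
# features (say, gripper opening width and surface curvature), and the label
# says whether a simulated physics check predicted the grasp would hold.
def make_grasp():
    return [random.uniform(-1, 1), random.uniform(-1, 1)]

def true_label(g):
    # Toy rule standing in for the physics model's verdict.
    return 1.0 if g[0] + g[1] < 0 else 0.0

data = [(g, true_label(g)) for g in (make_grasp() for _ in range(2000))]

# Minimal logistic-regression "grasp quality" scorer, trained by
# full-batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
n = len(data)
for _ in range(200):
    gw, gb = [0.0, 0.0], 0.0
    for g, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * g[0] + w[1] * g[1] + b)))
        gw[0] += (p - y) * g[0]
        gw[1] += (p - y) * g[1]
        gb += p - y
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def grasp_score(g):
    """Predicted probability that a candidate grasp succeeds."""
    return 1.0 / (1.0 + math.exp(-(w[0] * g[0] + w[1] * g[1] + b)))

def best_grasp(candidates):
    """Rank candidate grasps and execute the one the model scores highest."""
    return max(candidates, key=grasp_score)
```

At run time the robot would generate many candidate grasps for the object in front of it and call `best_grasp` to choose one, which is why a learned scorer generalizes to objects it was never explicitly programmed for.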
When confronted with, say, a plastic toy Yoda, the system recognizes that it should use the gripper to pick up the toy.
But when it faces the ketchup bottle, it opts for the suction cup.
The picker can work its way through a bin full of random stuff. It is not perfect, but because the system can learn on its own, it is improving far faster than the machines of the past.
The Bed Maker
This robot may not be perfect for hospitals, but it represents notable progress. Berkeley researchers assembled the system in just two weeks, using the latest machine learning techniques. Not long ago, it would have taken months or years.
Now, the system can learn to make a bed in a fraction of that time, simply by analyzing data: in this case, data describing the movements that lead to a made bed.
The Pusher
On the Berkeley campus, in a lab called BAIR, another system applies different learning methods. It can push an object with a gripper and predict where the object will go, which means it can move toys around a desk much as you or I would.
The system learns this behavior by analyzing large collections of video showing how objects get pushed. In this way, it can handle the uncertainty and unexpected movement that come with this kind of task.
The Future
These are all simple tasks. And machines can only handle them under certain conditions. They fail as much as they impress. But the machine learning methods that guide these systems point to continued progress in the coming years.
Like those at OpenAI, researchers at the University of Washington are training robotic hands that have the same fingers and joints as our own.
That is much harder than controlling a gripper or a suction cup. An anthropomorphic hand can move in so many different ways.
So the Washington researchers train the hand in simulation, a digital recreation of the real world. That simplifies the training process.
At OpenAI, researchers train their Dactyl hand in the same way. The system needed what would have amounted to 100 years of trial and error to learn to rotate the alphabet block. Digital simulation, run across thousands of computer chips, compressed all that learning into about two days.
The hand learns these tasks through repeated trial and error. Once it has learned what works in simulation, it can apply that knowledge to the real world.
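The trial-and-error loop described above is, in essence, reinforcement learning. A minimal tabular Q-learning sketch on a toy block-rotation task shows the shape of the idea; it is a stand-in for illustration, not OpenAI's actual algorithm, and the states, actions and rewards are invented.

```python
import random

random.seed(1)

# Toy task: a block has 8 discrete orientations; the agent can rotate it
# left or right, and earns a reward only when the target face shows.
N_STATES, TARGET = 8, 3
ACTIONS = (-1, +1)  # rotate left / rotate right

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = random.randrange(N_STATES)
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = (s + ACTIONS[a]) % N_STATES
        r = 1.0 if s2 == TARGET else 0.0
        # Standard Q-learning update from the observed transition.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if r:
            break  # episode ends once the target face shows

def policy(state):
    """Greedy action after training: which way to rotate from this state."""
    return ACTIONS[max((0, 1), key=lambda i: Q[state][i])]
```

After training, the greedy policy rotates toward the target from either side, knowledge gained purely from rewarded trial and error rather than from any hand-written rule.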
Many researchers questioned whether this kind of simulated training would transfer to the physical world. But like researchers at Berkeley and other labs, the OpenAI team has shown that it can.
They introduce a certain amount of randomness into the simulated training. They vary the friction between the hand and the block. They even change the simulated gravity. After learning to handle all this randomness in a simulated world, the hand can handle the uncertainties of the real one.
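This technique is known as domain randomization, and its core loop is simple: resample the simulator's physical parameters for every training episode so the policy cannot overfit to one exact physics configuration. The parameter names, ranges and placeholder episode below are invented for illustration.

```python
import random

random.seed(2)

def randomized_sim_params():
    """Resample the simulated physics (illustrative names and ranges)."""
    return {
        "friction": random.uniform(0.5, 1.5),      # hand-block friction scale
        "gravity": random.uniform(9.0, 10.6),      # m/s^2, jittered around 9.81
        "block_mass": random.uniform(0.03, 0.09),  # kg
    }

def run_episode(policy, params):
    """Placeholder episode: a real system would step a physics engine here."""
    return policy(params)  # returns this episode's reward

def train(policy, episodes=1000):
    """Train/evaluate under a freshly randomized simulator each episode."""
    total = 0.0
    for _ in range(episodes):
        params = randomized_sim_params()  # new physics every episode
        total += run_episode(policy, params)
    return total / episodes
```

Because the policy only ever sees physics drawn from a distribution, the real world ends up looking like just one more sample from that distribution, which is why the skills transfer.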
Today, all Dactyl can do is rotate a block. But researchers are exploring how these same techniques can be applied to more complex tasks. Think manufacturing. And flying drones. And maybe even driverless cars.