New MIT robot can identify objects by sight and touch




The team took a KUKA robot arm and added a tactile sensor called GelSight, created by Ted Adelson's group at CSAIL. The information collected by GelSight was then fed to an AI so that it could learn the relationship between visual and tactile information.

To teach the AI to identify objects by touch, the team recorded 12,000 videos of 200 objects, including fabrics, tools, and household items, being touched. The videos were broken down into still images, and the AI used this dataset to connect tactile and visual data.
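The paper itself does not spell out the data format, but a pipeline like this typically pairs each camera frame with the GelSight reading captured at the same moment. The sketch below is a minimal, hypothetical illustration of that pairing step; the class and function names are placeholders, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): pairing camera frames with
# GelSight tactile frames extracted from the same touch video, so a model
# can learn the correspondence between the two modalities.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class TouchSample:
    """One time-aligned (visual frame, tactile frame) pair from a touch video."""
    visual: np.ndarray   # RGB camera frame, e.g. shape (H, W, 3)
    tactile: np.ndarray  # GelSight image of the contact surface, e.g. (h, w, 3)
    object_id: str       # which of the ~200 objects was being touched


def pair_frames(visual_frames: List[np.ndarray],
                tactile_frames: List[np.ndarray],
                object_id: str) -> List[TouchSample]:
    """Zip time-aligned frames from one video into cross-modal training pairs."""
    n = min(len(visual_frames), len(tactile_frames))
    return [TouchSample(visual_frames[i], tactile_frames[i], object_id)
            for i in range(n)]
```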

"By observing the scene, our model can imagine the sensation of touching a flat surface or sharp edge," says Yunzhu Li, PhD student in CSAIL and lead author of a new paper on the system. "By blindly touching ourselves, our model can predict interaction with the environment only from tactile feelings." Bridging these two senses could give more power to the robot and reduce the data we might need for tasks of manipulation and seizure of objects. "

For the moment, the robot can only identify objects in a controlled environment. The next step is to build a larger dataset so the robot can work in more diverse environments.

"Methods such as this one can be very useful for robotics, where you have to answer such questions as" Is this object hard or soft? "Or" if I lift this cup by its handle, what will be my grip? "," Says Andrew Owens, a postdoctoral researcher at the University of California at Berkeley. "It's a very complex problem because the signals are so different and this model has demonstrated a great ability."

