July 9, 2021 by Sarah Yang
Delivery services may be able to weather snow, rain, heat and the gloom of night, but a new class of legged robots is not far behind. Artificial intelligence algorithms developed by a team of researchers from UC Berkeley, Facebook and Carnegie Mellon University give legged robots an improved ability to adapt to and navigate unfamiliar terrain in real time.
Their test robot successfully traversed sand, mud, hiking trails, tall grass and piles of dirt without falling. It also outperformed alternative systems when a weighted backpack was tossed onto its back or when it walked up slippery, oiled inclines. Descending steps and clambering over piles of cement and pebbles, it achieved success rates of 70% and 80%, respectively, an impressive feat given that it had no calibration in simulation or prior experience with these unstable environments.
Not only could the robot adapt to new circumstances, it could also do so in fractions of a second rather than minutes or more. This is essential for practical deployment in the real world.
The research team will showcase the new AI system, called Rapid Motor Adaptation (RMA), next week at the 2021 Robotics: Science and Systems (RSS) conference.
“Our idea is that change is pervasive, so from day one the RMA policy assumes that the environment will be new,” said Jitendra Malik, principal investigator of the study, professor of electrical engineering and computer sciences at UC Berkeley and researcher at Facebook AI Research (FAIR). “This is not an afterthought, but foresight. It’s our secret sauce.”
Previously, legged robots were typically pre-programmed for the environmental conditions they were likely to encounter, or taught through a mixture of computer simulations and hand-coded policies dictating their actions. This could take millions of trials and errors, and still fall short of what the robot might face in reality.
“Computer simulations are unlikely to capture everything,” said lead author Ashish Kumar, a Ph.D. student in Malik’s lab at UC Berkeley. “Our RMA-enabled robot shows strong performance in adapting to environments it has never seen before, and it learns this adaptation entirely by interacting with its surroundings and learning from experience. That is new.”
The RMA system combines a base policy – the algorithm by which the robot determines how to move – with an adaptation module. The base policy uses reinforcement learning to develop controls for sets of extrinsic variables in the environment. It can be learned in simulation, but that alone is not enough to prepare the legged robot for the real world, because the robot’s on-board sensors cannot directly measure all the possible variables in the environment. To solve this, the adaptation module lets the robot infer properties of its surroundings from its own body movements. For example, if the robot senses that its feet are extending farther, it can assume that the surface it is on is soft and will adapt its next movements accordingly.
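The sketch below is a minimal illustration of how such a two-part design can be wired up; it is not the authors’ code. The network sizes, the history window, and names like `BasePolicy` and `AdaptationModule` are assumptions made here for clarity, written in PyTorch.

```python
# Illustrative sketch (not the published RMA implementation), assuming PyTorch
# and made-up dimensions: a base policy maps the robot's state plus an
# estimated "extrinsics" latent to joint commands, while an adaptation module
# infers that latent purely from recent state-action history.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 30, 12     # hypothetical proprioceptive state / joint targets
LATENT_DIM, HISTORY_LEN = 8, 50    # hypothetical extrinsics latent and history window


class BasePolicy(nn.Module):
    """Chooses the next action given the current state and the extrinsics latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))


class AdaptationModule(nn.Module):
    """Estimates the extrinsics latent from a window of recent states and actions,
    standing in for environment variables the on-board sensors cannot measure."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HISTORY_LEN * (STATE_DIM + ACTION_DIM), 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, history):   # history: (batch, HISTORY_LEN, STATE_DIM + ACTION_DIM)
        return self.net(history.flatten(start_dim=1))


# One control step: infer what the terrain "feels like", then act on it.
policy, adapter = BasePolicy(), AdaptationModule()
state = torch.zeros(1, STATE_DIM)
history = torch.zeros(1, HISTORY_LEN, STATE_DIM + ACTION_DIM)
z_hat = adapter(history)       # e.g. soft ground shows up in how the legs have moved
action = policy(state, z_hat)  # next joint commands, conditioned on that estimate
```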
The base policy and adaptation module are executed asynchronously and at different frequencies, allowing RMA to operate robustly with only a small on-board computer.
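The following self-contained sketch shows one way that asynchronous split can look in practice; the stand-in functions and loop rates are illustrative assumptions, not the published numbers.

```python
# Sketch of running two loops at different rates: a slow thread refreshes a
# shared extrinsics estimate while the fast control loop always reads whatever
# estimate is newest. Rates and sizes are illustrative only.
import threading
import time

latest_z = [0.0] * 8                      # shared extrinsics estimate (hypothetical size)


def estimate_extrinsics():
    """Stand-in for the adaptation module's heavier forward pass."""
    time.sleep(0.02)                      # pretend this computation takes ~20 ms
    return [0.0] * 8


def next_action(z):
    """Stand-in for the base policy's cheap per-tick forward pass."""
    return [0.0] * 12


def adaptation_loop(rate_hz=10):          # slower loop: refresh the estimate
    global latest_z
    while True:
        latest_z = estimate_extrinsics()
        time.sleep(1.0 / rate_hz)


threading.Thread(target=adaptation_loop, daemon=True).start()

for _ in range(5):                        # faster loop: act on the latest estimate
    action = next_action(latest_z)
    time.sleep(1.0 / 100)                 # illustrative ~100 Hz control tick
```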
The other members of the research team are Deepak Pathak, assistant professor in the School of Computer Science at CMU, and Zipeng Fu, master’s student in Pathak’s group.
The RMA project is part of an industry-university collaboration between the FAIR group and the Berkeley AI Research (BAIR) lab. Prior to joining the CMU faculty, Pathak was a researcher at FAIR and a visiting scholar at UC Berkeley. Pathak also earned his Ph.D. in electrical engineering and computer sciences from UC Berkeley.