The system uses RFID tags to home in on targets; could benefit robotic manufacturing, collaborative drones and other applications – ScienceDaily

A new system developed at MIT uses RFID tags to help robots locate moving objects with unprecedented speed and accuracy. The system could enable greater collaboration and precision from robots working on packaging and assembly, as well as from swarms of drones performing search-and-rescue missions.

In a paper being presented next week at the USENIX Symposium on Networked Systems Design and Implementation, the researchers show that robots using the system can locate tagged objects in 7.5 milliseconds on average, and with an error of less than one centimeter.

In the system, called TurboTrack, an RFID (radio-frequency identification) tag can be applied to any object. A reader sends a wireless signal that reflects off the RFID tag and other nearby objects, then bounces back to the reader. An algorithm sifts through all the reflected signals to find the RFID tag's response. Final computations then leverage the RFID tag's movement (even though this usually decreases accuracy) to improve its localization accuracy.

The researchers say the system could replace computer vision for some robotic tasks. Like its human counterpart, computer vision is limited by what it can see, and it can fail to notice objects in cluttered environments. Radio-frequency signals have no such restrictions: they can identify targets without line of sight, amid clutter, or through walls.

To validate the system, the researchers attached one RFID tag to a cap and another to a bottle. A robotic arm located the cap and placed it onto the bottle, which was held by another robotic arm. In another demonstration, the researchers tracked RFID-equipped nanodrones during docking, maneuvering, and flying. In both tasks, the system was as accurate and fast as traditional computer-vision systems, while working in scenarios where computer vision fails, the researchers report.

"If you use RF signals for tasks typically performed with computer vision, you not only allow robots to do human things, but also do superhuman things," says Fadel Adib, assistant professor and principal investigator at MIT. . Media Lab and founding director of the Signal Kinetics research group. "And you can do it scalable because these RFID tags cost only 3 cents each."

In manufacturing, the system could enable robot arms to be more precise and versatile in, say, picking up, assembling, and packaging items along an assembly line. Another promising application is using handheld "nanodrones" for search-and-rescue missions. Nanodrones currently use computer vision and methods for stitching together captured images for localization purposes. These drones often get confused in chaotic areas, lose track of objects behind walls, and cannot uniquely identify each other. All this limits their ability to, say, spread out over an area and collaborate to search for a missing person. Using the researchers' system, nanodrones in swarms could be better localized for greater control and collaboration.

"You can allow a swarm of nanodrones to form in certain ways, to fly in crowded and even hidden environments, with great precision," says lead author Zhihong Luo, a graduate student in the Signal Kinetics research group.

The other Media Lab co-authors are visiting student Qiping Zhang, postdoc Yunfei Ma, and research assistant Manish Singh.

Super resolution

Adib's group has been working for years on using radio signals for tracking and identification purposes, such as detecting contamination in bottled foods, communicating with devices inside the body, and managing warehouse inventory.

Similar systems have attempted to use RFID tags for localization tasks, but these come with trade-offs between accuracy and speed: to be accurate, they can take several seconds to find a moving object; to increase speed, they lose accuracy.

The challenge was to achieve speed and accuracy simultaneously. To do so, the researchers drew inspiration from an imaging technique called "super-resolution imaging." These systems stitch together images from multiple angles to achieve a finer-resolution image.

"The idea was to apply these super-resolution systems to radio signals," says Adib. "When something moves, you have more prospects to follow it, so you can exploit the movement for more precision."

The system combines a standard RFID reader with a "helper" component that is used to localize radio-frequency signals. The helper broadcasts a signal spanning multiple frequencies, built on a modulation scheme used in wireless communication called orthogonal frequency-division multiplexing (OFDM).
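The shape of such a multi-frequency broadcast can be illustrated with a toy baseband sketch. This is not TurboTrack's actual waveform; the subcarrier count, spacing, and sample rate below are invented illustrative values.

```python
import numpy as np

def ofdm_probe(num_subcarriers=16, spacing_hz=1e6, duration_s=1e-5, fs=64e6):
    """Toy OFDM-style probe: a sum of equally spaced complex subcarriers.

    All parameters are illustrative assumptions, not TurboTrack's values.
    """
    n = int(round(duration_s * fs))
    t = np.arange(n) / fs  # sample times
    # Subcarrier frequencies spaced `spacing_hz` apart, centered around 0 Hz
    freqs = (np.arange(num_subcarriers) - num_subcarriers // 2) * spacing_hz
    signal = sum(np.exp(2j * np.pi * f * t) for f in freqs)
    return t, signal

t, sig = ofdm_probe()
print(len(t), abs(sig[0]))  # all subcarriers align at t=0, so |sig[0]| = 16
```

Transmitting many frequencies at once is what lets the receiver compare the same echo across frequencies, rather than probing one frequency at a time.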

The system captures all the signals that rebound off objects in the environment, including off the RFID tag. One of those signals carries a signature specific to the RFID tag, because RFID tags reflect and absorb an incoming signal in a distinctive pattern, corresponding to bits of 0s and 1s, that the system can recognize.

Because these signals travel at the speed of light, the system can compute a "time of flight" (measuring distance by calculating the time it takes a signal to travel between a transmitter and a receiver) to gauge the location of the tag, as well as of the other objects in the environment. But this provides only a ballpark localization figure, not subcentimeter precision.
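The time-of-flight arithmetic itself is simple; a minimal sketch (with an invented echo time) looks like this:

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflector, given the round-trip time of its echo."""
    return C * t_seconds / 2  # halve: the signal travels out and back

# An echo returning after ~20 nanoseconds implies a reflector ~3 m away.
d = distance_from_round_trip(20e-9)
print(f"{d:.2f} m")  # 3.00 m
```

At these speeds, a one-centimeter error corresponds to tens of picoseconds of timing error, which is why time of flight alone gives only a coarse estimate.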

Leveraging movement

To zoom in on the tag's location, the researchers developed what they call a "space-time super-resolution" algorithm.

The algorithm combines the location estimates for all the rebounding signals, including the RFID signal, which it determined using time of flight. Using probability calculations, it narrows that group down to a handful of potential locations for the RFID tag.

As the tag moves, the angle of its signal changes slightly, a change that also corresponds to a certain location. The algorithm can then use that angle change to track the tag's distance as it moves. By constantly comparing that changing distance measurement against all the other distance measurements from the other signals, it can find the tag in three-dimensional space. All this happens in a fraction of a second.
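The "narrow many candidates down with many distance measurements" idea can be sketched with a generic probabilistic filter. This is a standard technique illustrated with invented values (reader positions, noise level, true tag position), not TurboTrack's actual algorithm.

```python
import numpy as np

# Scatter candidate tag positions over the workspace, then score each by how
# well it explains a set of noisy distance measurements from several vantage
# points, keeping the best-scoring candidate as the location estimate.
rng = np.random.default_rng(1)
true_pos = np.array([2.0, 1.0, 0.5])  # hidden tag position, metres (invented)
readers = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0]),
           np.array([0.0, 3.0, 0.0]), np.array([4.0, 3.0, 1.0])]

candidates = rng.uniform([0, 0, 0], [4, 3, 2], size=(5000, 3))
log_score = np.zeros(len(candidates))

sigma = 0.02  # assumed per-measurement distance noise, metres
for reader in readers:
    measured = np.linalg.norm(true_pos - reader) + rng.normal(0.0, sigma)
    predicted = np.linalg.norm(candidates - reader, axis=1)
    # Gaussian log-likelihood of each candidate under this measurement
    log_score += -((predicted - measured) ** 2) / (2 * sigma ** 2)

estimate = candidates[np.argmax(log_score)]
print(np.round(estimate, 2))  # close to true_pos
```

Each additional vantage point shrinks the set of candidates consistent with all measurements, which is why combining measurements over time and space sharpens the estimate.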

"The general idea is that by combining these measurements over time and in space, you get a better reconstruction of the beacon position," says Adib.

The work was sponsored, in part, by the National Science Foundation.
