Video Friday: Flying robot inspired by insects, and more



The TU Delft MAVLab has been working on its DelFly flapping-wing robots for years, and this is the most impressive one to date:

Insects are among the most agile natural flyers. Hypotheses about their flight control cannot always be validated by experiments with animals or tethered robots. To this end, we developed a programmable and agile autonomous free-flying robot controlled through bio-inspired motion changes of its flapping wings. Despite being 55 times the size of a fruit fly, the robot can accurately mimic the rapid escape maneuvers of flies, including a correcting yaw rotation toward the escape heading. Because the robot's yaw control was turned off, we showed that these yaw rotations result from a passive, translation-induced aerodynamic coupling between the yaw torque and the roll and pitch torques produced throughout the maneuver. The robot enables new methods of studying animal flight, and its flight characteristics allow real-world flight missions.

[ Science ] via [ MAVLab ]

In a new paper, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say they have made a key development in this area: a system that lets robots inspect random objects and visually understand them enough to accomplish specific tasks without ever having seen them before.

The system, called Dense Object Nets (DON), looks at objects as collections of points that serve as visual roadmaps of sorts. This approach lets robots better understand and manipulate items and, most importantly, pick a specific object out of a clutter of others, a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.

The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object's 3D shape, similar to how panoramic photos are stitched together from multiple images. After training, if a person specifies a point on an object, the robot can take a photo of that object and identify and match points so that it can then pick up the object at that specified point. This is different from systems like UC Berkeley's DexNet, which can grasp many different items but can't satisfy a specific request. Imagine an 18-month-old, who doesn't understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to "go grab your truck by the red end of it."
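
In other words, DON learns a per-pixel visual descriptor, so the same physical point on an object maps to nearly the same descriptor across images. Here is a minimal sketch of just the matching step, assuming descriptor maps have already been produced by a trained network; the array shapes and the `match_point` helper are illustrative, not the paper's API:

```python
import numpy as np

def match_point(ref_desc, query_desc, ref_pixel):
    """Find the pixel in a new image whose descriptor best matches the
    descriptor at a user-specified pixel in a reference image.

    ref_desc, query_desc: (H, W, D) per-pixel descriptor maps, assumed
    to come from a trained dense descriptor network (illustrative).
    ref_pixel: (row, col) of the point the user picked on the object.
    """
    target = ref_desc[ref_pixel]  # (D,) descriptor of the chosen point
    # L2 distance from the target descriptor to every pixel's descriptor
    dists = np.linalg.norm(query_desc - target, axis=-1)  # (H, W)
    # Best match = pixel with the smallest descriptor distance; the robot
    # would then plan its grasp at that pixel's back-projected 3D location.
    return np.unravel_index(np.argmin(dists), dists.shape)
```

The appeal is that the match is tied to a point on the object rather than to a fixed grasp pose, which is what lets the robot pick the object up at the specified point even when it's lying in a new configuration.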

In the future, the team hopes to improve the system to a place where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grab an object and move it with the ultimate goal of, say, cleaning a desk.

[ MIT ]

We demonstrated high-speed catching of a marshmallow without deformation. A marshmallow is a very soft object that is difficult to grasp without deforming its surface. To achieve this, we developed a 1-ms sensor fusion system with a high-speed active vision sensor and a high-speed, high-precision proximity sensor.
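
The point of fusing the two sensors is that vision can guide the approach while the proximity sensor takes over near contact, where precision matters most. A toy sketch of such a blend, evaluated once per millisecond; the weighting scheme and thresholds are assumptions for illustration, not the lab's actual algorithm:

```python
import numpy as np

def fused_distance(vision_est, proximity_est, far=0.03, near=0.005):
    """Blend a vision-based distance estimate with a proximity-sensor one.

    vision_est, proximity_est: fingertip-to-object distance [m].
    Far from the surface we trust vision; within `near` of contact we
    trust the high-precision proximity sensor, blending linearly between.
    """
    w = np.clip((far - proximity_est) / (far - near), 0.0, 1.0)
    return (1.0 - w) * vision_est + w * proximity_est

# A 1 kHz loop would call this once per millisecond, slowing the fingers
# as the fused distance approaches zero so the surface never deforms.
```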

Now they just need a high-speed robot hand that can stuff those marshmallows into my mouth. Nom!

[ Ishikawa Senoo Laboratory ]

We haven't heard from I-Wei Huang, aka Crab Fu, in a while. More than a little while, actually. He was one of the steampunk makers we featured in this article from 10 years ago! It's good to see another of his unique steam-powered robot characters.

Toot toot!

[ Crab Fu ]

You don't have to watch all five minutes of this, just the bit where a 4-meter-tall remote-controlled robot saws a log in half.

Also, the bit at the end is cute.

[ Mynavi ]

A teaser for what appears to be one of the longest robotic arms we've ever seen.

[ Suzumori Robotics Lab ]

It's a bit dizzying, but that's what happens when you put a 360-degree camera on a racing drone.

[ Team BlackSheep ]

Robot gymnast 3-2 certainly mounts the uneven bars quickly, but that dismount needs a little work.

[ Hinamitetu ]

This video shows the latest results from the Dynamic Interaction Control lab at the Italian Institute of Technology on humanoid robot locomotion and manipulation. We integrated the iCub walking algorithms into a new teleoperation system, allowing a human to teleoperate the robot during locomotion and manipulation tasks.

Also, don't forget this:

[ Paper ]

AEROARMS is one of the most incredible and ambitious projects in Europe. In this video, we explain what AEROARMS is. The project is part of the European Commission's H2020 scientific program; it was funded in 2015 and will be completed in September of this year (2018).

Yeah, all drones should come with a pair of cute little arms.

[ Aeroarms ]

There's an IROS workshop on falling humanoid robots, so of course they had to make a promotional video:

I still remember how awesome it was when CHIMP managed to get itself back up at the DRC.

[ Workshop ]

New from Sphero: still round, but now with a display!

It's $150.

[ Sphero ]

This video shows a human-sized biped robot, dubbed Mercury, with passive ankles, relying solely on hip and knee actuation for balance. Unlike humans, having passive ankles forces Mercury to balance by taking continuous steps. This ability is not only very difficult to achieve, but it also allows the robot to respond quickly to disturbances, such as those produced when walking among humans.
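
Without ankle torque, the only way to recover balance is to choose where the next footstep lands. In the classic linear inverted pendulum model, that target is the "capture point": the spot where stepping brings the robot to rest. Here's a minimal sketch of that calculation; the model and the numbers are textbook assumptions, not UT Austin's actual controller:

```python
import math

GRAVITY = 9.81  # m/s^2

def capture_point(com_pos, com_vel, com_height):
    """Step location that brings a linear-inverted-pendulum biped to rest.

    com_pos: horizontal center-of-mass offset from the stance foot [m]
    com_vel: horizontal center-of-mass velocity [m/s]
    com_height: assumed-constant pendulum height [m]
    """
    omega = math.sqrt(GRAVITY / com_height)  # pendulum natural frequency
    return com_pos + com_vel / omega         # instantaneous capture point

# Example: CoM 5 cm ahead of the foot, drifting forward at 0.4 m/s:
print(capture_point(0.05, 0.4, 1.0))  # ~0.18 m: step there to stop
```

A push simply moves the capture point, which is why continuous stepping doubles as a fast response to disturbances.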

[ UT Austin ]

Mexico was the partner country of the Hannover Fair in 2018, and KUKA has a strong presence in Mexico. That made it the perfect excuse to team up with QUARSO on a new use for robots: serving as windows into the virtual world at the Mexican pavilion.

A little Bot & Dolly-ish, yes?

[ Kuka ]

For the first time ever, take a video tour of the award-winning Robotics and Mechanisms Laboratory (RoMeLa), led by its director, Dr. Dennis Hong. Meet a variety of robots that play soccer, climb walls, and even do the 8-Clap.

[ RoMeLa ]

This week's CMU RI seminar is from Matthew O'Toole, CMU, on "Imaging the World One Photon at a Time".

At the heart of every camera, and one of the pillars of computer vision, is the digital photodetector, a device that forms images by collecting billions of photons from the physical world. While the photodetectors used in cell phones and professional DSLR cameras are designed to aggregate as many photons as possible, I will talk about a different type of sensor, called a single-photon avalanche diode (SPAD), designed to detect and timestamp individual photon events. By developing computational algorithms and hardware systems around these sensors, we can perform new feats in imaging, including the ability (1) to image the propagation of light at trillions of frames per second, (2) to form images in extremely low light by counting individual photons, and (3) to reconstruct the shape and reflectance of objects hidden from view.
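
Since a SPAD reports individual photon arrival times rather than intensities, a "trillion frames per second" video is really a histogram: repeat a pulsed-laser exposure many times and bin the timestamps. A toy sketch of that step, with made-up numbers rather than anything from the talk:

```python
import numpy as np

def transient_histogram(timestamps_ps, bin_width_ps=4, num_bins=512):
    """Build a transient response from SPAD photon arrival times.

    timestamps_ps: photon arrival times relative to the laser pulse
    [picoseconds], accumulated over many pulse repetitions.
    """
    edges = np.arange(num_bins + 1) * bin_width_ps
    counts, _ = np.histogram(timestamps_ps, bins=edges)
    return counts  # counts[i] ~ scene response at time i * bin_width_ps

# Example: 10,000 simulated photon arrivals around a 600 ps return peak
rng = np.random.default_rng(0)
arrivals = rng.normal(loc=600, scale=20, size=10_000)
hist = transient_histogram(arrivals)
print(hist.argmax() * 4, "ps")  # peak near 600 ps round-trip time
```

At 4 ps per bin, each bin corresponds to roughly 1.2 mm of light travel, which is the kind of resolution that makes watching light propagate, and reconstructing objects hidden from view, possible.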

[ CMU RI ]