Watch super slow-mo videos from a camera that sees like humans do




Rather than capturing individual frames, event cameras continuously register changes in the intensity of light in a scene. That's not only more economical in terms of data, since the static parts of a scene take up no space, but nothing gets lost between frames, unlike with a conventional camera. It enables feats such as high-speed photography, improved HDR performance and extremely fast machine vision.
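To make the idea concrete, here is a minimal sketch (not the researchers' code) of how an event camera's output can be simulated from ordinary frames: each pixel emits an event, tagged with a polarity, whenever its log intensity has drifted past a threshold since its last event. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Emit (t, x, y, polarity) events whenever a pixel's log intensity
    changes by more than `threshold` since that pixel's last event.
    Illustrative simulation only; real sensors do this asynchronously
    in hardware, per pixel, with microsecond timestamps."""
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        diff = log_now - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_now[fired]  # reset reference where events fired
    return events

# A static scene produces no events; only the changing pixel reports.
frames = np.ones((3, 4, 4))
frames[1, 2, 2] = 2.0   # brighten one pixel in frame 1
frames[2, 2, 2] = 1.0   # dim it back in frame 2
print(events_from_frames(frames))  # → [(1, 2, 2, 1), (2, 2, 2, -1)]
```

Note how the sixteen static pixels generate no data at all, which is the bandwidth saving described above.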

The irony of event cameras is that although they work much like our eyes, you can't display their raw data on a screen. It first has to be "reconstructed" into viewable video, which is where the new research comes in. Instead of trying to extract frames by hand, the ETH Zurich team first trained a neural network on simulated event data. They then used the results to build a high-speed network that processes event-camera data in real time.
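For contrast, the hand-designed baseline that the learned approach replaces can be sketched as simply integrating event polarities into a leaky image buffer. This is a toy illustration under assumed conventions (events as `(t, x, y, polarity)` tuples), not the network the team trained.

```python
import numpy as np

def accumulate_events(events, shape, decay=0.9):
    """Naive reconstruction: integrate event polarities into a frame that
    fades over time. A hand-crafted stand-in for the kind of step the
    trained network learns to do far better."""
    frame = np.zeros(shape)
    last_t = 0
    for t, x, y, polarity in events:
        frame *= decay ** (t - last_t)   # fade old evidence as time passes
        frame[y, x] += polarity          # brighten or darken the pixel
        last_t = t
    return frame

# One positive event at pixel (2, 2) leaves a single bright spot.
frame = accumulate_events([(1, 2, 2, 1)], shape=(4, 4))
print(frame[2, 2])  # → 1.0
```

Hand-tuned rules like this tend to smear fast motion and lose texture, which is why learning the reconstruction from simulated data pays off.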

The result is around 20 percent better image quality, all in real time. The team was also able to generate high-speed video at 5,400 fps, capturing a ball striking a ceramic gnome, for example. In another demonstration, the algorithm produced high-dynamic-range video under difficult lighting conditions.

The team also demonstrated some practical applications that weren't part of the core work, such as object recognition (road signs and traffic lights) and depth estimation without binocular vision. The research looks particularly promising for autonomous driving and machine vision, letting vehicles see and detect objects without processing large amounts of data. The technology is still in its infancy, but the team has released its reconstruction code and a pre-trained model so that other researchers can try it out.

