What do you do when an experiment is too fast for even the fastest cameras in the world to see it?
For a trio of California Institute of Technology researchers, the answer was simple: build a faster camera.
Previously, the fastest video cameras in the world captured one frame every hundred-billionth of a second, or 100 billion frames per second. That is fast: a hundred-billionth of a second is about the time it takes a beam of light to travel the length of a sesame seed. But it was not fast enough.
Researchers working with advanced lasers had developed a technique called "temporal focusing," in which a laser pulse can be fired over an extremely short, compressed period of time. The entire burst of light rushes out at once, and researchers knew that temporally focused lasers behaved differently from lasers emitted over longer periods.
But existing cameras were too slow to study them. There were ways around this problem in other ultrafast experiments: researchers would sometimes run the same experiment over and over in front of the same too-slow camera until they had collected enough frames of the action to assemble a single complete movie. That would not work for a compressed laser pulse smashing into a surface such as frosted glass; the researchers wanted to see what that looked like, but they knew it would be slightly different each time, so there was no way to stitch several experiments into one movie.
So the three scientists developed a technology called compressed ultrafast photography (T-CUP), which records at 10 trillion frames per second. One hundred times faster than the previous fastest recording method, T-CUP combines video data with data from a still image. As the researchers described in a paper published Aug. 8 in the journal Nature, T-CUP splits the image of the laser between two devices: a motion-recording camera and a camera that takes a single exposure of the scene. The video camera records the scene at the fastest rate it can manage, while the still camera captures a single photo of the laser's entire motion.
Then a computer combines the data from the two cameras, using the smeared image from the still camera to fill in the gaps in the video. The result? A 450 by 150 pixel video, 350 frames long.
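For readers curious how a single still exposure can help rebuild a video, here is a minimal toy sketch of the data-fusion idea in Python. It is not the authors' reconstruction method (the real T-CUP pipeline solves a compressed-sensing inverse problem); the array names, the simple "streak"-style measurement, and the rescaling loop are all assumptions made only for illustration, and the only figures borrowed from the article are the 450 x 150 pixel, 350-frame output size.

```python
import numpy as np

# Toy illustration only: NOT the authors' algorithm. It sketches the core idea
# that a time-integrated still image can fill in gaps left by a time-resolved
# but spatially smeared measurement.

rng = np.random.default_rng(0)

# Small stand-in dimensions; the published video was 450 x 150 pixels, 350 frames.
T, H, W = 35, 15, 45
scene = rng.random((T, H, W)) + 0.1   # the true, unknown event (positive values)

still_image = scene.sum(axis=0)       # single exposure: sharp in space, no time info
streak = scene.sum(axis=1)            # "streak"-like data: resolves time, smears height

# Naive fusion: iteratively rescale a guess so it agrees with both measurements.
estimate = np.ones_like(scene)
for _ in range(100):
    estimate *= (streak / estimate.sum(axis=1))[:, None, :]        # match time-resolved data
    estimate *= (still_image / estimate.sum(axis=0))[None, :, :]   # match the still image

print("max mismatch vs. still image:", np.abs(estimate.sum(axis=0) - still_image).max())
print("max mismatch vs. streak data:", np.abs(estimate.sum(axis=1) - streak).max())
```

The loop is just iterative proportional fitting, but it shows the intuition behind combining the two cameras: two complementary measurements constrain the individual frames far better than either one could alone.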
Originally posted on Live Science.