Light is the fastest thing in the universe, so catching it on the move is necessarily something of a challenge. We've had some success, but a new rig built by Caltech scientists captures a staggering 10 trillion frames per second, meaning it can record light as it travels along. And the team plans to make it a hundred times faster still.
Understanding how light moves is fundamental to many fields, so it isn't just idle curiosity driving the efforts of Jinyang Liang and colleagues (not that there would be anything wrong with that). There are potential applications in physics, engineering, and medicine that depend heavily on the behavior of light at scales so small, and on timescales so short, that they sit at the very limit of what can be measured.
You may have heard of billion- and even trillion-FPS cameras in the past, but those were likely streak cameras that do a bit of cheating to achieve those numbers.
If a light pulse can be reproduced perfectly, you could send one every millisecond but offset the camera's capture time by an even smaller amount, say a handful of femtoseconds (a femtosecond is a quadrillionth of a second). You would capture one pulse when it was here, the next when it was a little farther along, the next when it was farther still, and so on. The end result is a movie that is in many ways indistinguishable from one in which that first pulse was captured at high speed.
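To make that repeated-pulse trick concrete, here is a minimal Python sketch of this kind of equivalent-time sampling, assuming an idealized, perfectly repeatable Gaussian pulse. The pulse shape, shot count, and step size are illustrative choices, not values from the actual experiment.

```python
import numpy as np

# A toy version of the repeated-pulse "cheat": fire a (hypothetical) perfectly
# repeatable pulse once per shot, shift the single capture instant by a few
# femtoseconds between shots, and stitch the samples into one movie.
FS = 1e-15  # one femtosecond, in seconds

def pulse_intensity(t):
    """Idealized, perfectly repeatable pulse: a Gaussian envelope centered at 500 fs."""
    return np.exp(-((t - 500 * FS) / (100 * FS)) ** 2)

n_shots = 200               # one exposure per repeated pulse
step = 5 * FS               # capture instant advances 5 fs per shot
capture_times = np.arange(n_shots) * step

# Each shot samples the pulse at one delayed instant; together they look like
# a movie of a single pulse, provided every pulse really is identical.
movie = np.array([pulse_intensity(t) for t in capture_times])

print(f"{n_shots} frames covering {capture_times[-1] / FS:.0f} fs of pulse evolution")
```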
This is very effective, but you can't always count on being able to produce a light pulse a million times in exactly the same way. Perhaps you need to see what happens when it passes through a carefully engineered laser-etched lens that will be altered by the very first pulse that strikes it. In cases like that, you need to capture that first pulse in real time, which means recording images not just with femtosecond precision, but only femtoseconds apart.
That's what the T-CUP method does. It combines a femtosecond streak camera with a second, static camera and a data-collection technique used in tomography.
"We knew that using only a femtosecond scanning camera, the quality of the image would be limited. So, to improve this, we have added another camera that acquires a static image. Combined with the image acquired by the femtosecond scanning camera, we can use what is called a radon transformation to get high quality images while recording ten billion frames per second. " , said the study's co-author, Lihong Wang. This clarifies things!
At any rate, the method captures images (technically spatiotemporal datacubes) just 100 femtoseconds apart. That works out to 10 trillion per second, or it would if they wanted to run it that long, but there's no storage array fast enough to write ten trillion datacubes per second to. So they can only keep it running for a handful of frames in a row, 25 during the experiment shown here.
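As a quick sanity check on those figures, frames spaced 100 femtoseconds apart do indeed correspond to 10 trillion frames per second, and a 25-frame burst spans just 2.5 picoseconds:

```python
# Sanity check on the figures above: frames 100 femtoseconds apart correspond
# to 10 trillion frames per second, and a 25-frame burst covers 2.5 picoseconds.
frame_interval_s = 100e-15                 # 100 fs between frames
frames_per_second = 1 / frame_interval_s   # 1e13 = 10 trillion
burst_frames = 25
burst_duration_ps = burst_frames * frame_interval_s * 1e12

print(f"{frames_per_second:.0e} frames per second")
print(f"{burst_duration_ps:.1f} ps per {burst_frames}-frame burst")
```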
Those images show a femtosecond-long laser pulse passing through a beam splitter. Note that at this scale, the time it takes for the light to pass through the lens itself is nontrivial. You have to take that kind of thing into account!
That level of real-time precision is unprecedented, but the team isn't done yet.
"We are already seeing opportunities to increase the speed up to a quadrillion (1015) images per second! ", Is excited Liang in the press release. Capturing the behavior of light at this scale and with this level of fidelity far exceeds what we were able to do just a few years ago and could open up new fields or fields of investigation in physics and exotic materials. .
The paper by Liang et al. was published today in the journal Light.