The little camera on your phone holds a great power: it can see things that our eyes cannot.
At night, over recent weeks, I've been frolicking around dark places taking photos with a new night-vision mode called Night Sight on Google's $800 Pixel 3. Friends in a candlelit bar look as if they brought in a lighting crew. Dark streets pop with reds and greens. A midnight cityscape lights up as if it were late afternoon. This goes well beyond an Instagram filter, into you-have-to-see-it territory.
Night Sight is a big step forward for smartphone photography – and an example of the faketastic turn our photos are taking.
That's right: you do not look like your photos. Photography has never been simply about capturing reality, but the latest phones are taking pictures further and further into uncharted territory.
For now, Night Sight is just one mode you turn on for dark shots on Google's Pixel phones. But it is hardly an isolated case: phone makers of all kinds boast about how beautiful their photos are, not how realistic. The iPhone's "Portrait Mode" artfully blurs backgrounds and identifies facial features to reduce red-eye. Selfies on phones popular in Asia automatically slim faces, brighten eyes and smooth skin. And the latest phones use a technique called HDR that merges several shots to produce a punched-up version of reality.
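To make the HDR idea concrete, here is a minimal sketch of exposure fusion – the general technique of weighting each pixel by how well exposed it is and blending the frames. This is not Apple's or Google's actual pipeline; the frame format, the Gaussian weighting and the function name are illustrative assumptions:

```python
import numpy as np

def merge_exposures(frames):
    """Blend several aligned exposures of one scene into a single image.

    frames: list of float RGB arrays in [0, 1], shape (H, W, 3).
    Each pixel is weighted by how well exposed it is, so highlights
    come mostly from the darker frames and shadows from the brighter
    ones. (Toy exposure fusion, loosely in the spirit of Mertens et al.)
    """
    merged = np.zeros_like(frames[0])
    total_weight = np.zeros(frames[0].shape[:2])
    for frame in frames:
        # Gaussian "well-exposedness": pixels near mid-gray score high;
        # blown-out or crushed pixels score near zero.
        luma = frame.mean(axis=2)
        weight = np.exp(-((luma - 0.5) ** 2) / (2 * 0.2 ** 2))
        merged += frame * weight[..., None]
        total_weight += weight
    return merged / np.maximum(total_weight[..., None], 1e-6)
```

The "punched-up" look comes from that weighting: no single frame's highlights or shadows dominate, so the merged picture shows detail everywhere at once, which real scenes rarely present to the eye.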
When I recently took the same picture at sunset with a 2014 iPhone 6 and this year's iPhone XR, I was struck by the difference: the newer iPhone's shot looked as though it had been painted in watercolor.
What is happening? Smartphones have democratized photography for 2.5 billion people; taking a great photo once required special equipment and a user manual.
Now artificial intelligence and other software advances are democratizing the creation of beauty. Yes, beauty. Editing photos no longer requires Photoshop skills. When presented with a panoramic view or a smiling face, phone cameras apply algorithms trained on what humans like to see, and produce images adjusted accordingly.
Your phone is equipped with high-tech beer goggles. Think of your camera less as a reflection of reality and more as an AI trying to make you happy. It's faketastic.
Taking a picture on a phone has become much more than passing light through a lens onto a sensor. Of course, that hardware still matters, and it has improved over the past decade.
But more and more, it's the software – not the hardware – that improves our photos. "That's hyperbole, but it's true," said Marc Levoy, a retired Stanford computer science professor who once taught Google's founders, Larry Page and Sergey Brin, and now works for them on camera projects, including Night Sight.
Levoy's work is rooted in the limitations inherent in the size of a smartphone. Phones can't accommodate large lenses (and the sensors beneath them) the way traditional cameras do, so manufacturers have had to find creative ways to compensate. Enter techniques that substitute software for optics, such as digitally combining multiple shots into one.
The new phones from Apple, Samsung and Huawei use these techniques too, but "we bet the farm on software and AI," Levoy said. That has freed Google to explore new ways of making images.
"Google in terms of software has an advantage," said Nicolas Touchard, vice president of marketing at DxOMark Image Labs, which produces independent evaluations for cameras. (The question of whether this is enough to help the Pixel winner to convert Apple and Samsung is a separate issue.)
With Night Sight, Google's software is at its most extreme: it can capture up to 15 shots in low light and blend them to brighten faces, pull out crisp detail and saturate colors in a flattering way. No flash is fired – the software artfully amplifies the light that is already there.
Anyone who has tried to take a picture in half-light with a traditional camera knows how hard it is to avoid blur. With Night Sight, even before you press the button, the phone measures your hand shake and the motion in the scene to determine how many shots to take and how long to keep the shutter open for each. When you press the shutter button, it warns you to hold still and records for up to six seconds.
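That trade-off – measure motion, then balance per-shot exposure against shot count within a roughly six-second budget – can be sketched in a few lines. This is a hypothetical illustration, not Google's algorithm; the parameter names and the one-pixel blur limit are assumptions:

```python
def plan_burst(motion_px_per_s, budget_s=6.0, max_blur_px=1.0, max_frames=15):
    """Pick a per-frame exposure short enough that motion blur stays under
    max_blur_px, then fit as many frames as possible into the time budget."""
    per_frame_s = min(budget_s, max_blur_px / max(motion_px_per_s, 1e-6))
    n_frames = min(max_frames, max(1, int(budget_s / per_frame_s)))
    return n_frames, per_frame_s

# A shaky handheld scene (lots of motion) gets many short exposures;
# a nearly still one gets fewer, longer exposures.
print(plan_burst(10.0))   # -> (15, 0.1)
print(plan_burst(0.25))   # -> (1, 4.0)
```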
Over the next second or two, Night Sight divides all of its shots into a grid of tiny tiles, aligning and blending the best elements into one complete picture. Finally, AI and other software analyze the image to choose its colors and tones.
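Here is a toy sketch of that tile-based align-and-merge step – a brute-force version for illustration only, far simpler than Night Sight's real merge. It assumes grayscale float frames of identical size, with dimensions that are multiples of the tile size:

```python
import numpy as np

def align_and_merge(burst, tile=16, search=4):
    """Average a burst of noisy frames after aligning each small tile to a
    reference frame by brute-force translation search. Averaging N aligned
    frames cuts noise by roughly sqrt(N), which is what lets a dark scene
    brighten without a flash."""
    ref = burst[0]
    h, w = ref.shape  # assumed to be multiples of `tile`
    acc = ref.copy()
    for frame in burst[1:]:
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                best, best_err = ref_tile, np.inf
                # Try small shifts of this tile and keep the closest match,
                # so hand shake and moving subjects don't cause ghosting.
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - tile and 0 <= xx <= w - tile:
                            cand = frame[yy:yy + tile, xx:xx + tile]
                            err = np.mean((cand - ref_tile) ** 2)
                            if err < best_err:
                                best, best_err = cand, err
                acc[y:y + tile, x:x + tile] += best
    return acc / len(burst)
```

Working tile by tile, rather than aligning whole frames, is what lets the merge cope with a scene where some things moved and others didn't.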
Night Sight had some trouble focusing in nearly lightless scenes, and you and your subject really do have to hold the pose. But in most of my tests, the results were fantastic. Portraits smoothed skin while keeping eyes sharp. Night landscapes brought out hidden details and colored them like Willy Wonka's chocolate factory.
The problem is this: how should a computer choose the tones and colors of things it records in the dark? Should it render a starry sky as bright as twilight?
"If we can not see it, we do not know what it looks like," Levoy said. "There are a lot of aesthetic decisions. We made them in a way, you could do them differently. Maybe these phones will eventually need a button "What I see" or "What is really there". "
So if our phones invent colors and light that we like, does the result still count as a photograph? Or is it a computer-generated work of art?
Some purists argue the latter. "That's what always happens with disruptive technology," Levoy said.
What does "fake" mean even, he asks. Professional photographers have been making settings for a long time in Photoshop or in a dark room. Before that, film makers had tweaked the colors for a certain look. This could be an academic concern if we did not talk about the hobby – not to mention the memories – of a third of humanity.
How far will phones take our photos from reality? What will software train us to think looks normal? What parts of our images will we leave to computers to edit? In one photo I took of the White House (without Night Sight), I noticed that the Pixel 3's algorithms, tuned to smooth out imperfections, removed architectural details that remained visible in a shot from an iPhone XS.
At DxOMark, the camera-testing firm, the question is how to judge images that software has interpreted for qualities such as the beauty of faces.
"Sometimes the manufacturers push too far. Usually, we say that everything is fine if they have not destroyed the information. If you want to be objective, you have to consider the camera as a device that captures the information, "Touchard said.
For another point of view, I called Kenan Aktulun, the founder of the iPhone Photography Awards. Over the past decade he has reviewed more than a million iPhone photos, giving him a deep view of amateurs' work.
The dividing line between digital art and photography "gets really blurry at some point," Aktulun said. Still, he ultimately welcomes technological improvements that make the process and tools of photo creation invisible. The appeal of smartphone photography is its accessibility – one button and you're there. Artificial intelligence is an evolution of that.
"As the technical quality of images has improved, we are looking for an emotional connection," Aktulun said. "Those who attract a lot more attention are not technically perfect. These are pictures that give a glimpse of the life or experience of the person. "