Google's Pixel 3 camera rewrites the rules of photography with new zoom and raw images



Every digital camera is flawed. Image sensors can't capture light perfectly, lenses distort scenes, and photos often look dull compared with what you remember seeing.

But Google, with its Pixel 3 and Pixel 3 XL smartphones, has found new ways to use software and hardware to overcome those flaws and get better images. Its Pixel and Pixel 2 phones were already at the cutting edge of smartphone photography, and the Pixel 3 goes even further.

The Pixel 3 camera holds its own against Apple's iPhone XS even though it has only one camera on the back. It does without a camera flash, relying instead on new low-light shooting abilities. And it offers enthusiasts a radically new variety of raw image that opens up photographic flexibility and artistic freedom.

All this is possible thanks to a field called computational photography, a term coined in 2004 by Marc Levoy, a distinguished engineer at Google, while he was at Stanford, before he moved full time to Google research. Photography once was all about glass lenses and film chemistry, and the first-generation digital cameras that closely followed that analog approach are fading fast.


[Embedded video: Comparison between the iPhone XS and Pixel 3 cameras, 5:05]

Now, our cameras depend as much on computers as on optics. And what we have seen so far is just the beginning.

Here's what it means specifically for Google's Pixel 3 and its bigger brother, the Pixel 3 XL.

Super Res Zoom to push these pixels

The term "digital zoom" has a bad reputation, because you can't just say "enhance," zoom into an image and expect new details to appear that weren't captured in the first place.

That's why it's worth paying extra for optical zoom methods – notably the second (or third or fourth) camera in phones from companies such as Apple, Samsung and LG Electronics. The Pixel 3 comes with a feature called Super Res Zoom that introduces a new way to capture detail in the first place. The result is that Google's single main camera delivers image quality that "comes very, very close" to that of a second camera with 2X optical zoom, Levoy said.

Google says the Pixel 3's Super Res Zoom feature, used to take the photo on the left, comes very close to the image quality of a shot taken with a camera with 2X optical zoom. The photo on the right was taken with an iPhone XS Max at 2X, and both are enlarged to 100 percent.

Stephen Shankland / CNET

Here's how it works – but first, a little background on the innards of digital cameras.

Every image sensor has an array of pixels that records the intensity of the light each pixel sees. But to record color as well, camera makers place a checkerboard pattern of filters over the pixels. This Bayer filter, invented at Eastman Kodak in the 1970s, means each pixel records only red, green or blue, the three colors from which digital photos are built.

This image alternates between a Super Res Zoom shot taken with a Pixel 3 and an ordinary 2X digital zoom shot taken with a Pixel 2.

Google

A problem with the Bayer filter is that the camera has to make up data so each pixel gets all three colors – red, green and blue – not just one of them. This mathematical process, called demosaicing, means you can view and edit a photo, but it's just a computer making its best guess about how to fill in the color detail pixel by pixel.
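
To make that guessing concrete, here is a minimal sketch of a bilinear-style demosaic for an RGGB Bayer mosaic. It illustrates the general technique only, not the algorithm any particular camera uses, and the function name and layout choices are assumptions.

```python
import numpy as np

def bilinear_demosaic(bayer):
    """Toy demosaic of an RGGB Bayer mosaic (illustrative only).

    bayer: 2D array where even rows alternate R,G and odd rows alternate G,B.
    Returns an HxWx3 RGB image in which every missing color sample is
    estimated by averaging the nearest recorded neighbors of that color.
    """
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)

    # Masks marking where each color was actually recorded (RGGB layout).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for channel, mask in enumerate([r_mask, g_mask, b_mask]):
        samples = np.where(mask, bayer, 0.0)
        counts = mask.astype(np.float32)
        neighbor_sum = np.zeros((h, w), dtype=np.float32)
        neighbor_cnt = np.zeros((h, w), dtype=np.float32)
        # Sum recorded samples of this color over each pixel's 3x3 neighborhood.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                neighbor_sum += np.roll(np.roll(samples, dy, axis=0), dx, axis=1)
                neighbor_cnt += np.roll(np.roll(counts, dy, axis=0), dx, axis=1)
        estimate = neighbor_sum / np.maximum(neighbor_cnt, 1.0)
        # Keep real measurements where they exist; guess everywhere else.
        rgb[..., channel] = np.where(mask, bayer, estimate)
    return rgb
```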

Super Res Zoom gathers more information in the first place. It combines multiple shots and counts on your imperfectly steady hands to move the phone slightly between them, so the camera can collect red, green and blue color data – all three colors – for each element of the scene. If your phone is on a tripod, the Pixel 3 will use its optical image stabilizer to wiggle the view artificially, Levoy said.

The result: sharper lines, better colors and no demosaicing. This gives the Pixel 3 a better basis for digital zoom.
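
As a rough illustration of why gathering real samples beats guessing, here is a toy merge in the same spirit: several Bayer frames with known whole-pixel offsets, standing in for hand shake, are accumulated so each output pixel collects genuine red, green and blue measurements. The real pipeline handles sub-pixel shifts, alignment and robustness to scene motion, none of which this sketch attempts.

```python
import numpy as np

def merge_bayer_frames(frames, offsets, height, width):
    """Toy multi-frame merge in the spirit of Super Res Zoom (illustrative).

    frames:  list of RGGB Bayer mosaics (2D arrays) of identical size.
    offsets: list of (dy, dx) whole-pixel shifts of each frame relative to
             the first, e.g. estimated from hand shake.
    Returns an RGB image whose channels are filled, wherever possible, with
    real recorded samples from the shifted frames instead of guesses.
    """
    rgb_sum = np.zeros((height, width, 3), dtype=np.float32)
    rgb_cnt = np.zeros((height, width, 3), dtype=np.float32)

    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                # Which color this photosite actually recorded (RGGB layout).
                if y % 2 == 0 and x % 2 == 0:
                    channel = 0  # red
                elif y % 2 == 1 and x % 2 == 1:
                    channel = 2  # blue
                else:
                    channel = 1  # green
                ty, tx = y + dy, x + dx  # where the sample lands after the shift
                if 0 <= ty < height and 0 <= tx < width:
                    rgb_sum[ty, tx, channel] += frame[y, x]
                    rgb_cnt[ty, tx, channel] += 1
    # Average the accumulated samples; unfilled slots stay at zero.
    return rgb_sum / np.maximum(rgb_cnt, 1.0)
```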

Those who shoot at the camera's natural focal length might wish for the extra quality, too, but Super Res Zoom only kicks in at 1.2X zoom or more, Levoy said. Why not at 1X? It's a matter of performance, he said: Super Res Zoom slows down photo taking and uses more power.

And Super Res Zoom doesn't work with video, either, so if you want telephoto there, a second camera can still pay off.

New computational raw for flexible photos

More than a decade ago, a generation of digital photography enthusiasts and pros discovered the power of shooting with a camera's raw photo format: data taken directly from the image sensor without extra processing. Google's Pixel 3 smartphones could extend that revolution to mobile phones, too.

Android phones have been able to shoot raw since 2014, when Google added support for Adobe's DNG (Digital Negative) file format for saving unprocessed data. But the limitations of smartphone image sensors have held the technology back.

With a DSLR or a big-sensor mirrorless camera, shooting raw offers plenty of benefits if you're willing, or eager, to get your hands dirty with photo-editing software like Adobe Lightroom. That's because "baking" a JPEG locks in many of the camera's decisions about color balance, exposure, noise reduction, sharpening and other image attributes. Shooting raw hands photographers control of all that.

Raw has lagged on mobile phones, though, because the tiny image sensors in phones suffer from heavy noise and limited dynamic range, or the ability to capture both bright highlights and murky details in the shadows. Today, advanced phone cameras sidestep the problem by combining multiple shots into one high-dynamic-range (HDR) image. Google's approach, HDR+, merges up to nine underexposed frames, an approach Apple has imitated with its new iPhone XS and XS Max.
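
A toy version of that multi-frame idea: average several aligned, underexposed frames to beat down noise, then brighten and tone-map the result. This only sketches the principle; HDR+ itself also aligns the frames and rejects moving content, and the gain and tone curve here are arbitrary assumptions.

```python
import numpy as np

def merge_underexposed(frames, gain=4.0):
    """Toy HDR-style merge of aligned, underexposed frames (illustrative).

    frames: list of HxWx3 arrays with values in [0, 1]. Averaging suppresses
    random sensor noise, which is what makes it safe to brighten the shadows
    afterward without them dissolving into grain.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)        # noise drops roughly with sqrt(N frames)
    brightened = merged * gain         # recover shadow detail
    tone_mapped = brightened / (1.0 + brightened)  # keep highlights from clipping
    return np.clip(tone_mapped, 0.0, 1.0)
```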

The Pixel 3 camera merges multiple shots and applies other tricks to create a single "computational raw" photo file with less noise and better color than the standard raw file, taken at left with Adobe's Lightroom app. To be fair, Adobe also offers an HDR option, and its noisier image retains some detail, too.

Stephen Shankland / CNET

With the Pixel 3, Google's Camera app can now shoot raw, too, except that it applies Google's special HDR sauce first. If you enable the DNG setting in the camera app's settings, the Pixel 3 will create a DNG that's already been processed for attributes such as dynamic range and noise reduction, without losing the flexibility of a raw file.

"Our philosophy with raw is that there should be no compromise," Levoy said. "We use Super Res Zoom and HDR+ on these files. The dynamic range is amazing."

There are still limits, though. If you zoom in while taking a photo, your JPEG images will have more pixels than your DNGs. For JPEGs, the Pixel 3 zooms with a combination of Google's RAISR AI technology and the more traditional Lanczos algorithm, Levoy explained, but for raw shots, you'll have to do the digital zoom yourself.
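
For raw shooters, that do-it-yourself digital zoom could be as simple as Lanczos resampling in an editor or a few lines of code. The sketch below uses Pillow on an already-rendered image, since Pillow can't open a DNG directly; it stands in for the Lanczos half of the story only, not RAISR.

```python
from PIL import Image

def lanczos_zoom(path, factor=2):
    """Upscale an image with Lanczos resampling (a stand-in for the digital
    zoom you would apply to your own raw conversions; illustrative only)."""
    img = Image.open(path)
    w, h = img.size
    return img.resize((w * factor, h * factor), resample=Image.LANCZOS)

# Example: lanczos_zoom("photo.jpg", factor=2).save("photo_2x.jpg")
```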

Another drawback to Pixel 3 raw: although Google could use Super Res Zoom's rich color data to skip demosaicing, most photo-editing software can only handle raw files that haven't been demosaiced yet. So the Pixel 3 supplies a Bayer-pattern DNG file instead.

"Pixel camera JPEG files can actually be more detailed than DNGs in some cases," Levoy said.

Google's images also get a boost from an image sensor that outperforms the Pixel 2's, said Isaac Reynolds, product manager for Pixel cameras at Google.

See in the dark with Night Sight

All Pixel models use HDR+ by default to produce images with good dynamic range. The Pixel 3 will take that a step further with a variation of the technology called Night Sight for shooting in the dark, although the feature won't arrive for a few weeks, Google said.

"Night Sight is HDR+ on steroids," Levoy said, with the camera capturing up to 15 frames in less than a third of a second. The camera combines those frames into one shot, handling chores such as aligning the images and avoiding "ghost" artifacts caused by details that differ from one frame to the next.

A third-of-a-second exposure is quite long, even with optical image stabilization. To avoid problems, the Pixel 3 uses "motion metering," which monitors the camera's images and its gyroscope to shorten the shutter speed when blur from camera shake or moving subjects is a risk.
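
Conceptually, that's an exposure-time budget: pick the longest shutter speed whose predicted blur stays under a threshold. Here's a minimal sketch of the trade-off; the parameter names, default values and linear blur model are illustrative assumptions, not Google's implementation.

```python
def choose_exposure(gyro_rate_dps, scene_motion_px_per_s,
                    max_exposure_s=1.0 / 3.0, blur_budget_px=1.5,
                    pixels_per_degree=30.0):
    """Toy 'motion metering' heuristic (illustrative only).

    Picks the longest exposure that keeps estimated blur, from both hand
    shake (gyroscope rate) and subject motion (e.g. frame differencing),
    under a budget measured in pixels.
    """
    # Blur grows roughly linearly with exposure time for both motion sources.
    shake_px_per_s = gyro_rate_dps * pixels_per_degree
    total_px_per_s = shake_px_per_s + scene_motion_px_per_s
    if total_px_per_s <= 0:
        return max_exposure_s
    return min(max_exposure_s, blur_budget_px / total_px_per_s)
```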

In practice, that approach delivers detailed shots, Reynolds said.

Google also had to figure out a new way to judge the right white balance, correcting the color casts a photo can pick up under lighting conditions such as daytime shade, fluorescent bulbs or sunset. Google now uses AI technology to set the white balance, Levoy said.
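
Google hasn't detailed its AI model, but the classical baseline it improves on is easy to show: gray-world white balance, which simply assumes the average color of a scene should come out neutral. The sketch below is that baseline, not Google's method.

```python
import numpy as np

def gray_world_white_balance(image):
    """Classical gray-world white balance (a baseline, not Google's AI method).

    image: HxWx3 float array. Scales each channel so the scene's average
    color becomes neutral gray, removing a global color cast.
    """
    means = image.reshape(-1, 3).mean(axis=0)          # average R, G, B
    gains = means.mean() / np.maximum(means, 1e-6)     # per-channel correction
    return np.clip(image * gains, 0.0, None)
```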

The company plans to offer the feature in the camera app's More menu but could also make Night Sight easier to reach, Reynolds added. "We realize that could be a pain, that you could forget it when you're in very dim light," he said. "There will be an easier way to get into it."

AI brains for portraits and more

Last year's Pixel 2 was the first Google phone to include the Pixel Visual Core, a processor Google designed to speed up artificial intelligence tasks. The Pixel 3 has the AI booster, too, and this year Google is using it for new photography abilities.

The Pixel Visual Core helps with HDR+ and plays a vital role in the camera app's Lens feature, which lets you search based on a photo or recognize a phone number so you can dial it.

A photo taken with the Google Pixel 3 XL portrait mode.

Stephen Shankland / CNET

And it plays an important role in this year's updated portrait mode, which mimics the background blur possible with conventional cameras that can shoot with a shallow depth of field. Apple pioneered portrait mode by using two cameras to calculate how far away elements of a scene are from the camera. Google did it with one camera and a "dual-pixel" image sensor that produces similar depth information.

But now Google handles all of that with artificial intelligence that analyzes the depth information better.

"The background will be more uniformly defocused, especially for subjects at medium distances, like 5 to 10 feet away," Levoy said.

Another advantage of AI: Google can keep training the system for better results and ship those improvements in software updates. And Google doesn't train the system only on faces, Levoy said. "Our learning-based depth-from-dual-pixels method works on all scenes, including flowers. Especially flowers!"

The Pixel 3 embeds the depth information in its JPEG files so you can adjust the blur and the focus point afterward in the Google Photos app.
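
A toy version of that later refocusing might look like the sketch below: pick a focus depth, then blur each pixel in proportion to how far its depth value sits from that focus. Google Photos' actual rendering is certainly more sophisticated; the blur levels and normalization here are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, max_sigma=8.0):
    """Toy synthetic-bokeh refocus using a per-pixel depth map (illustrative).

    image: HxWx3 float array; depth: HxW array of relative depth values.
    Pixels far from focus_depth are drawn from progressively blurrier copies
    of the image, mimicking a shallow depth of field.
    """
    # Precompute a handful of blur levels, from sharp to very blurry.
    sigmas = [0.0] + list(np.linspace(1.0, max_sigma, 5))
    blurred = [image if s == 0 else gaussian_filter(image, sigma=(s, s, 0))
               for s in sigmas]

    # Map each pixel's distance from the focus plane to one of those levels.
    defocus = np.abs(depth - focus_depth)
    defocus = defocus / max(defocus.max(), 1e-6)
    level = np.round(defocus * (len(sigmas) - 1)).astype(int)

    out = np.empty_like(image)
    for i in range(len(sigmas)):
        mask = level == i
        out[mask] = blurred[i][mask]
    return out
```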

Artificial intelligence also powers Top Shot, a feature that's triggered when the camera detects faces and then tries to pick a winner from a burst of frames. It was trained on a database of 100 million images of people smiling, showing surprise and not blinking.
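
In spirit, Top Shot is a scoring problem over a burst of frames. The sketch below assumes a face detector has already produced per-face attribute scores; the attribute names and weights are invented for illustration and are not Google's.

```python
def pick_top_shot(frames):
    """Toy frame selection in the spirit of Top Shot (illustrative only).

    frames: list of dicts, each with an optional 'sharpness' score and a
    'faces' list whose entries carry 'smile', 'eyes_open' and 'surprise'
    scores in [0, 1], as a trained model might produce.
    Returns the index of the highest-scoring frame.
    """
    def score(frame):
        faces = frame.get("faces", [])
        sharpness = frame.get("sharpness", 0.0)
        if not faces:
            return sharpness
        face_score = sum(
            0.5 * f["smile"] + 0.4 * f["eyes_open"] + 0.1 * f["surprise"]
            for f in faces
        ) / len(faces)
        return 0.7 * face_score + 0.3 * sharpness

    return max(range(len(frames)), key=lambda i: score(frames[i]))
```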

The new chip power also lets the Pixel detect where human faces and bodies are in a shot and brighten them slightly for a more flattering photo, Reynolds said.

"We've dubbed it synthetic fill flash," he said. "It emulates what a reflector could do," referring to the reflective materials that portrait and product photographers use to bounce more light onto their subject.

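A toy version of that synthetic fill flash might simply brighten a feathered mask over the detected faces and bodies, as below. Google's actual relighting is surely more nuanced than a single gain factor, and the mask handling and boost values here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_fill(image, subject_mask, boost=1.25):
    """Toy synthetic fill flash (illustrative, not Google's method).

    image: HxWx3 float array in [0, 1]; subject_mask: HxW boolean array
    marking detected faces and bodies. Gently brightens the masked region,
    feathering the mask so the boost fades out instead of leaving an edge.
    """
    soft = gaussian_filter(subject_mask.astype(np.float32), sigma=15)
    soft = np.clip(soft / max(soft.max(), 1e-6), 0.0, 1.0)
    gain = 1.0 + (boost - 1.0) * soft[..., None]
    return np.clip(image * gain, 0.0, 1.0)
```
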
Our future in computational photography

It's clear that computational photography is reaching deeper and deeper into every smartphone camera. The term has risen to such prominence that Apple marketing chief Phil Schiller mentioned it at the iPhone XS launch in September.

But only one company actually employs the guy who coined the term. Levoy is modest about this, pointing out that the technology has expanded well beyond his research.

"I invented the words, but I do not own them anymore," he said.

He has plenty of other ideas, too. He's particularly interested in depth information.

For example, knowing how far away different parts of a scene are could improve that synthetic fill flash feature, or let Google apply different white balance to nearby parts of a scene in blue shade and more distant areas under dimmer sunlight.

So expect more from the Pixel 4, or whatever else Levoy, Reynolds and their colleagues are working on now.

"We have just started scratching the surface," Levoy said, "with what computational photography and AI can do to improve basic single-press picture taking."

