How does Google Night Sight work and why is it so good?



Reading all the praise for Google's new Night Sight low-light photography feature for Pixel phones, you'd be forgiven for thinking Google had just invented color film. In fact, night photography modes are not new, and many of the underlying technologies go back years. But Google has done a remarkable job of combining its computational-imaging prowess with its unmatched strength in machine learning to push the capability beyond anything previously seen on a mobile device. We'll look at the history of multi-image low-light capture, how Google likely uses it, and speculate on what AI brings to the party.

The challenge of low-light photography

Long-exposure star trails in Joshua Tree National Park, shot with a Nikon D700. Image by David Cardinal.

All cameras struggle in low-light scenes. Without enough photons per pixel, noise can easily dominate an image. If you leave the shutter open long enough to gather light for a usable image, noise builds up as well. Perhaps worse, it's also hard to keep the image sharp without a sturdy tripod. Increasing the amplification (ISO) makes the picture brighter, but it amplifies the noise right along with the signal.
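A rough way to see why low light is so punishing: photon arrival follows Poisson statistics, so the signal-to-noise ratio of a pixel scales roughly with the square root of the photons it collects. Below is a minimal numpy sketch of that relationship; the photon counts are arbitrary example values, not measurements from any particular sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_snr(mean_photons, trials=100_000):
    """Simulate Poisson (shot-noise-limited) capture and return the measured SNR."""
    samples = rng.poisson(mean_photons, size=trials)
    return samples.mean() / samples.std()

for photons in (10, 100, 1_000, 10_000):   # arbitrary example light levels
    print(f"{photons:>6} photons/pixel -> SNR ~ {simulated_snr(photons):.1f} "
          f"(theory: {np.sqrt(photons):.1f})")
```

The fewer photons a tiny phone pixel collects, the further down that curve it sits, which is exactly where noise starts to dominate.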

Larger pixels, usually found on larger sensors, are the traditional way to address the problem. Unfortunately, phone camera sensors are tiny, with small photosites (pixels) that work well in good light but fail quickly as light levels drop.

Phone camera designers have two options for improving dimly lit images. The first is to capture multiple frames and combine them into a single, less noisy result. One of the earliest implementations on a mobile-device accessory was the SRAW mode of the DxO ONE add-on camera for the iPhone, which merged four RAW frames into one improved image. The second is to use clever post-processing (with recent versions often powered by machine learning) to reduce noise and sharpen the subject. Google's Night Sight uses both.
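To make the first option concrete, here is a minimal numpy sketch of what merging frames buys you: averaging N independently noisy captures of the same static, already-aligned scene cuts the noise by roughly the square root of N. Real pipelines such as Google's do far more (alignment, outlier rejection, tone mapping), so treat this as an illustration of the principle rather than anyone's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "true" scene and N noisy captures of it (stand-ins for real frames).
scene = rng.uniform(0.0, 1.0, size=(480, 640))
n_frames = 8
noise_sigma = 0.2
frames = [scene + rng.normal(0.0, noise_sigma, scene.shape) for _ in range(n_frames)]

merged = np.mean(frames, axis=0)          # naive merge: plain per-pixel average

print("single-frame noise:", np.std(frames[0] - scene))
print("merged-frame noise:", np.std(merged - scene))   # ~ noise_sigma / sqrt(n_frames)
```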

Multi-image, Single-Capture

By now we're all used to our phones and cameras combining multiple images into one, primarily to improve dynamic range. Whether it's a traditional bracketed exposure set, like the ones most companies use, or Google's HDR+, which merges multiple short-exposure frames, the result can be a superior final image, as long as the artifacts caused by blending multiple frames of a moving scene can be kept under control. Typically that's done by choosing a base frame that best represents the scene, then merging the useful parts of the other frames into it. Huawei, Google, and others have used this same approach to create higher-resolution telephoto shots as well. We recently saw how important choosing the right base frame is when Apple explained that its "BeautyGate" snafu was a bug that caused the wrong base frame to be chosen from the captured burst.
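One common way to pick that base frame, and the kind of heuristic a bug like "BeautyGate" could plausibly trip over, is to score each capture in the burst for sharpness and keep the best one before merging detail from the rest. The sketch below uses a variance-of-Laplacian score via OpenCV; it is a generic illustration of base-frame selection, not Apple's or Google's actual criterion.

```python
import cv2          # pip install opencv-python
import numpy as np

def sharpness(gray_frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher means more high-frequency detail (less blur)."""
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

def pick_base_frame(frames: list[np.ndarray]) -> int:
    """Return the index of the sharpest frame in a burst of grayscale captures."""
    return int(np.argmax([sharpness(f) for f in frames]))

# Usage with a hypothetical burst of grayscale frames (uint8 arrays):
# base_idx = pick_base_frame(burst)
# merged = merge_burst(burst, base=base_idx)   # merge step not shown here
```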

So it makes sense that Google has essentially combined these uses of multi-image capture to produce better low-light images. In doing so, it builds on a series of clever imaging innovations. It's likely that Marc Levoy's SeeInTheDark Android app and his 2015 paper on "Extreme Imaging Using Mobile Phones" were behind the effort. Levoy was a pioneer of computational imaging at Stanford and is now a Distinguished Engineer working on camera technology at Google. SeeInTheDark (a follow-on to his earlier SynthCam iOS app) used a standard phone to accumulate frames, aligning each new frame to the accumulated image, and then performed various noise-reduction and enhancement steps to produce a remarkably good low-light picture. In 2017, Google engineer Florian Kainz built on some of these concepts to show how a phone could produce professional-quality images even in very low light.
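The accumulation approach described for SeeInTheDark, aligning each new capture to the running accumulation and then folding it in, can be outlined roughly as below. This sketch uses simple FFT phase correlation to estimate a translation-only shift and a running average as the accumulator; the real app handles rotation, lens distortion, and far more sophisticated denoising, so treat this only as an outline of the idea.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Estimate the integer (dy, dx) shift to apply (e.g. with np.roll) so that
    `frame` lines up with `reference`, using FFT phase correlation (translation only)."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

def accumulate(frames: list[np.ndarray]) -> np.ndarray:
    """Align each frame to the running accumulation, then average them all."""
    accumulated = frames[0].astype(np.float64)
    for count, frame in enumerate(frames[1:], start=2):
        dy, dx = estimate_shift(accumulated / (count - 1), frame.astype(np.float64))
        aligned = np.roll(frame.astype(np.float64), shift=(dy, dx), axis=(0, 1))
        accumulated += aligned
    return accumulated / len(frames)
```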

Stacking multiple low-light images is a well-known technique

Photographers have been stacking multiple frames to improve low-light performance since the beginning of digital photography (and I suspect some even did it with film). In my case, I started by doing it by hand, then used a clever tool called Image Stacker. Since early DSLRs were nearly useless at high ISO, the only way to get great night shots was to capture multiple frames and stack them. Some classic subjects, such as star trails, were initially best captured that way. These days the practice isn't very common with DSLRs and mirrorless cameras, because current models offer excellent native high-ISO and long-exposure noise performance. I can leave the shutter open on my Nikon D850 for 10 to 20 minutes and still get very usable images.
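For readers curious what that manual stacking looks like in practice: noise-reduction stacks typically average the frames, while classic star-trail images are usually built with a "lighten" (per-pixel maximum) blend so each star's motion accumulates into a bright streak. Here is a minimal numpy sketch of both blend modes, assuming a list of already-loaded, already-aligned frames; it isn't the specific tool mentioned above, just the general technique.

```python
import numpy as np

def mean_stack(frames: list[np.ndarray]) -> np.ndarray:
    """Average the frames: suppresses random noise, good for static night scenes."""
    return np.mean(frames, axis=0)

def lighten_stack(frames: list[np.ndarray]) -> np.ndarray:
    """Per-pixel maximum ("lighten" blend): keeps every bright point,
    so moving stars add up into continuous trails."""
    return np.maximum.reduce(frames)
```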

It's therefore logical for phone makers to do the same thing with similar technology. However, unlike patient photographers shooting star trails from a tripod, the average phone user wants instant gratification and almost never uses a tripod. So the phone has to make low-light capture happen fairly quickly, and also minimize the blur caused by camera shake, and ideally even by subject movement. Even the optical image stabilization in many high-end phones has its limits.

I'm not sure which phone maker first used multi-image capture to improve low light, but the first one I used was the Huawei Mate 10 Pro. Its Night Shot mode captures a series of frames over 4 to 5 seconds and then merges them into a final photo. Because Huawei leaves the live preview running during capture, you can see that it uses several different exposures over that period, essentially creating a set of bracketed frames.

In his paper on the original HDR+, Levoy explains that aligning frames with different exposures is harder (which is why HDR+ uses many identically exposed frames). So it's likely that Google's Night Sight, like SeeInTheDark, also uses a burst of frames with identical exposures. However, Google (at least in the pre-release version of the app) doesn't show a live image on the phone's screen during capture, so that's speculation on my part. Samsung has taken a different tack in the Galaxy S9 and S9+, with a dual-aperture main lens. It can switch to an impressive f/1.5 in low light to improve image quality.
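As a quick aside on the Samsung approach, the light-gathering benefit of opening the aperture scales with the square of the f-number ratio. The calculation below compares f/1.5 against an f/2.4 baseline, used here purely as an assumed example of a narrower phone aperture:

```python
import math

wide, narrow = 1.5, 2.4          # f-numbers; f/2.4 is an assumed comparison point
light_gain = (narrow / wide) ** 2
stops = math.log2(light_gain)
print(f"f/{wide} gathers ~{light_gain:.1f}x the light of f/{narrow} (~{stops:.1f} stops)")
```

That works out to roughly 2.6x the light, or about 1.4 stops, which is meaningful but still far less than what stacking many frames can recover.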

Comparing Huawei's and Google's low-light camera capabilities

I don't have a Pixel 3 or a Mate 20 yet, but I do have access to a Mate 10 Pro with Night Shot and a Pixel 2 running a pre-release version of Night Sight, so I decided to compare them myself. In a series of tests, Google clearly outperformed Huawei, with less noise and sharper images. Here is one test sequence to illustrate:

The painting shot in daylight with the Huawei Mate 10 Pro

The painting shot in daylight with the Google Pixel 2

Without Night Shot mode, here's what you get shooting the same scene in near-darkness with the Mate 10 Pro. It chose a 6-second shutter time, and the blur shows it.

The same scene shot in near-darkness with Night Shot on the Huawei Mate 10 Pro. EXIF data shows ISO 3200 and a total exposure time of 3 seconds.

The same scene using Night Sight (pre-release version) on a Pixel 2. The colors are more accurate and the image is slightly sharper. EXIF data shows ISO 5962 and a 1/4-second shutter time (probably for each of the many frames). Both images have been recompressed to a smaller size for use on the web.

Is machine learning part of Night Sight's secret sauce?

Given how long image stacking has been around, and how many camera and phone makers have used some version of it, it's fair to ask why Google's Night Sight appears to be so much better than everything else. First, even the technique in Levoy's original paper is very complex, so the years Google has had to keep refining it should give it a solid head start over anyone else. But Google has also said that Night Sight uses machine learning to choose the right colors for a scene based on its content.

That's pretty cool, but also pretty vague. It isn't clear whether it segments individual objects so it knows each should have a consistent color, recognizes known objects and colors them appropriately, or recognizes a scene type globally, the way intelligent auto-exposure algorithms do, and decides how such scenes should generally look (green foliage, white snow, and blue sky, for example). I'm sure that once the final version ships and photographers get more experience with it, we'll learn more about this use of machine learning.
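To make the last of those possibilities concrete, a scene-level color decision can be as simple as mapping a predicted scene class to a color or white-balance prior. The toy sketch below is purely illustrative of that kind of pipeline; the class names, gain values, and the classifier it assumes are invented for the example and have nothing to do with Google's actual model.

```python
import numpy as np

# Hypothetical per-channel white-balance gains keyed by scene type (example values only).
SCENE_WB_GAINS = {
    "foliage":   np.array([0.95, 1.05, 0.90]),   # nudge toward green
    "snow":      np.array([1.00, 1.00, 1.00]),   # keep whites neutral
    "night_sky": np.array([0.90, 0.95, 1.10]),   # nudge toward blue
}

def apply_scene_color(image: np.ndarray, scene_type: str) -> np.ndarray:
    """Apply a per-channel gain chosen by a (separately trained) scene classifier.
    `image` is assumed to be a float RGB array scaled to [0, 1]."""
    gains = SCENE_WB_GAINS[scene_type]
    return np.clip(image * gains, 0.0, 1.0)

# scene_type = scene_classifier(image)   # hypothetical ML model, not shown
# corrected = apply_scene_color(image, scene_type)
```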

The initial exposure calculation is another place where machine learning could help. The base HDR+ technology behind Night Sight, as documented in Google's SIGGRAPH paper, relies on a dataset of thousands of hand-labeled sample scenes to help determine the right exposure to use. That would seem to be an area where machine learning could bring improvements, particularly in extending the exposure calculation to very low-light conditions where objects in the scene are noisy and hard to discern. Google has also been experimenting with using neural networks to improve phone image quality, so it wouldn't be surprising to see some of those techniques start showing up here.
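To illustrate the kind of role learning could play in that exposure step, one could train a small regressor that maps coarse image statistics to a log-exposure target using hand-labeled scenes like the ones the HDR+ paper describes. The scikit-learn sketch below is only a stand-in for the idea; the feature set, labels, and model choice are placeholders, not Google's actual training setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # pip install scikit-learn

def exposure_features(image: np.ndarray) -> np.ndarray:
    """Coarse luminance statistics used as features (placeholder feature set).
    `image` is assumed to be a float RGB array scaled to [0, 1]."""
    luma = image.mean(axis=-1)
    hist, _ = np.histogram(luma, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [luma.mean(), luma.std()]])

def train_exposure_model(training_images, log_exposure_labels):
    """Fit a regressor from image statistics to a hand-labeled log-exposure target."""
    X = np.stack([exposure_features(img) for img in training_images])
    model = GradientBoostingRegressor()
    return model.fit(X, log_exposure_labels)

# predicted_log_exposure = model.predict([exposure_features(new_image)])[0]
```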

Whatever combination of these techniques Google is using, the result is certainly the best low-light camera mode on the market today. It will be interesting, as newer Huawei models like the Mate 20 arrive, to see whether the company has been able to bring its own Night Shot feature closer to what Google has achieved.

Now read: Best Android Phones for Photographers in 2018, Mobile Photo Workflow: Pushing the Limits with Lightroom and Pixel, and LG V40 ThinQ: How 5 Cameras Push the Limits of Phone Photography
