During a hardware event in early October, Google announced that the Pixel 3 and Pixel 3 XL, its newest pair of flagship smartphones, would be the first to get a new photography mode that dramatically improves the quality of photos captured in low light: Night Sight. With the aid of machine learning and a bit of clever computational photography, Night Sight promised not only to brighten pics and selfies captured in near-darkness, but also to restore their color and sharpness — all without the aid of a camera flash.
Night Sight made a premature debut in October courtesy of a leaked app, but today marks the start of its official rollout. Beginning this week, Night Sight will come not only to the Pixel 3 and 3 XL, but to the Pixel 2 and original Pixel. And judging by our preliminary testing, it was well worth the wait.
Night Sight results
Check out this scene of a starless San Francisco evening. When Night Sight’s disabled, barely any of the homes on the horizon are visible — they’re a blurry, homogeneous inky-black smudge. Night Sight illuminates them brightly, restoring lost detail in the framing trees to the left and right.
Above: Night Sight disabled.
Image Credit: Kyle Wiggers / VentureBeat
Above: Night Sight enabled.
Image Credit: Kyle Wiggers / VentureBeat
It’s pretty much the same story in this photo set — albeit a more dramatic one. The entire valley becomes brighter with Night Sight switched on, from the grass and trees to the lake and neighborhood across the way. Shrubbery that couldn’t previously be seen becomes plain as day.
Above: Night Sight disabled.
Image Credit: Kyle Wiggers / VentureBeat
Above: Night Sight enabled.
Image Credit: Kyle Wiggers / VentureBeat
These shots are a bit more subtle, but notice the sky and the shadows cast by the rooftops of the skyscrapers. They’re noticeably more detailed than in the photo captured with the Pixel’s default shooting mode.
Above: Night Sight disabled.
Image Credit: Kyle Wiggers / VentureBeat
Above: Night Sight enabled.
Image Credit: Kyle Wiggers / VentureBeat
How to use Night Sight
Night Sight isn’t all that’s new. Two modes are joining the Pixel 3’s focus controls: Near and Far. The former focuses at about 4 feet, and the latter at about 12 feet — a hyperfocal distance, meaning everything from half that distance (roughly 6 feet) to infinity is in focus.
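For the curious, the 12-foot figure squares with textbook depth-of-field math. Here is a minimal sketch that checks it; the focal length, aperture, and circle of confusion below are assumptions picked to roughly match the Pixel 3’s camera, not published specs:

```python
# Hedged sketch: textbook hyperfocal-distance arithmetic. The lens
# parameters are assumptions, not official Pixel 3 specifications.

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    # Focusing at the hyperfocal distance H keeps everything from H/2
    # out to infinity acceptably sharp.
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(focal_mm=4.44, f_number=1.8, coc_mm=0.003)
print(f"hyperfocal: {H / 304.8:.1f} ft, near limit: {H / (2 * 304.8):.1f} ft")
# -> about 12.0 ft and 6.0 ft, matching the Far mode described above
```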
When Night Sight hits your Pixel, it’ll kick in automatically in most cases — the camera app will surface a shortcut to Night Sight if lighting conditions are dim enough. Alternatively, you can prime it manually by swiping over to the app’s More tab and selecting Night.
Night Sight isn’t a cure-all, of course. Google recommends soft, uniform lighting to avoid creating dark shadows, and warns that “very bright light sources” in the field of view could produce lens flare artifacts. If you’re having trouble focusing, try tapping on a high-contrast edge or the edge of a light source. And if your pics come out blurry, find something stable you can brace your phone against.
One last item to note: The easiest way to increase exposure is by tapping on and moving the camera app’s exposure slider. Decreasing exposure, on the other hand, is best done after the fact in the Google Photos editor (it’ll be less noisy).
How Night Sight works
So how does it work, exactly?
Night Sight first tackles the problem of image noise — i.e., random variations in brightness from pixel to pixel. As Google Research project lead Marc Levoy and staff software engineer Yael Pritch Knaan note in a detailed blog post, it’s largely the result of “moment-to-moment variations in the number of photons entering the lens,” and it’s something that every camera — but particularly smartphone cameras, which have small lenses and sensors — suffers from. To make matters worse, it’s not the only source of noise: read noise, the result of random errors introduced when the electronic charge out of each pixel is read and converted to a number, introduces additional distortion.
Together, read noise and image noise determine a photo’s overall signal-to-noise ratio (SNR), a measure of how clearly the image stands out from those random variations in brightness. Taking a longer exposure — capturing a greater amount of light — produces a less noisy image, but as Levoy and Knaan point out, holding still long enough to take a good picture in dim light is easier said than done.
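The statistics behind this are worth a quick illustration: averaging N frames of the same scene improves SNR by roughly the square root of N, which is the principle burst merging exploits. Below is a minimal simulation (our own sketch, not Google’s code) using Poisson-distributed photon counts:

```python
import numpy as np

# Illustrative sketch: photon shot noise follows a Poisson distribution,
# and averaging N frames improves SNR by roughly sqrt(N).
rng = np.random.default_rng(0)
signal = 10.0  # mean photon count per pixel in a dim scene
frames = rng.poisson(signal, size=(15, 100_000)).astype(float)

def snr(x: np.ndarray) -> float:
    return x.mean() / x.std()

print(f"single frame SNR:     {snr(frames[0]):.1f}")           # ~sqrt(10)  ≈ 3.2
print(f"15-frame average SNR: {snr(frames.mean(axis=0)):.1f}")  # ~sqrt(150) ≈ 12.2
```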
Above: Night Sight at dusk.
Image Credit: Kyle Wiggers / VentureBeat
Google’s high dynamic range plus (HDR+) mode partially mitigates this by capturing a burst of underexposed frames and aligning and merging them in software. By using short exposures and automatically rejecting frames for which it can’t find a perfect alignment, it not only enhances dynamic range — i.e., color and contrast — but improves SNR as well.
However, brightening photos captured in the dark isn’t that simple. Levoy and Knaan lay out the problem: Phone cameras start to struggle in extremely dim light of about 30 lux (lux measures the amount of light arriving at a surface per unit area, in lumens per square meter). That’s roughly the difference between a candlelit dinner and an average restaurant — perfectly fine in most scenarios — but the goal with Night Sight was to reduce noise in pictures taken in 0.3 to 3 lux, with 3 lux being equivalent to a sidewalk lit by street lamps.
The exposure conundrum
The solution seems simple enough: lengthen the exposure time of each frame. But there are two problems with that approach.
First, the Pixel uses zero shutter lag (ZSL), which minimizes the delay between triggering the shutter and when the photograph is actually recorded. That limits exposure time.
The camera app continuously captures image frames, storing them in a buffer that discards old frames to make room for new ones; the most recent 9 or 15 are reserved for HDR+ or Super Res Zoom. Because those same frames are displayed on the screen, though, simply increasing the exposure time of each one would make the viewfinder feel unresponsive.
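In practice, a ZSL pipeline behaves like a ring buffer: the shutter press grabs frames that were captured before the button went down. A hypothetical sketch of the idea (the buffer size and function names are ours, not Google’s):

```python
from collections import deque

# Hypothetical ZSL-style frame buffer: the newest viewfinder frames
# overwrite the oldest, so a capture can be assembled from frames
# recorded *before* the shutter press.
FRAME_BUFFER_SIZE = 15  # the article cites 9-15 frames for HDR+/Super Res Zoom

frame_buffer = deque(maxlen=FRAME_BUFFER_SIZE)

def on_viewfinder_frame(frame):
    frame_buffer.append(frame)  # oldest frame is silently dropped when full

def on_shutter_press():
    # Zero shutter lag: merge what's already buffered rather than
    # starting a fresh exposure after the button press.
    return list(frame_buffer)
```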
Above: Night Sight and a bright moon.
Image Credit: Kyle Wiggers / VentureBeat
Levoy and Knaan’s solution was to limit the exposure to 66 milliseconds most of the time, allowing the viewfinder to keep up with a display rate of 15 frames per second, but to enable positive shutter lag (PSL) the moment the shutter button is pressed with Night Sight enabled, permitting longer exposures and improving the signal-to-noise ratio.
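Put another way, the exposure policy flips based on the shutter state. Here is a hedged sketch of that logic; the names and structure are our assumptions, though the 66-millisecond cap comes straight from the article:

```python
# Sketch of the exposure policy described above. 66 ms per frame keeps
# the viewfinder at roughly 15 fps (1000 / 66 ≈ 15); a Night Sight
# shutter press switches to PSL, where longer exposures are allowed.
VIEWFINDER_MAX_EXPOSURE_MS = 66

def choose_exposure_ms(desired_ms: float, shutter_pressed: bool,
                       night_sight: bool) -> float:
    if shutter_pressed and night_sight:
        # PSL: the preview can freeze, so a longer exposure is fine.
        return desired_ms
    # ZSL / viewfinder path: cap exposure so the preview stays responsive.
    return min(desired_ms, VIEWFINDER_MAX_EXPOSURE_MS)
```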
Merging and aligning blurry photos
The second problem is motion blur — the result of shaky hands or moving objects.
The Pixel’s default picture-taking mode performs motion metering, which uses optical flow to measure scene motion and chooses a per-frame exposure time (up to 333 milliseconds) that minimizes blur. It even goes so far as to detect whether the phone might be on a tripod or held against a wall (as measured by the gyroscope) and increase exposure to as long as 1 second. Additionally, it varies the number of frames captured depending on whether the phone’s on a tripod (up to 6 frames) or handheld (up to 15), and limits the total capture time to 6 seconds.
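Taken together, those constraints suggest a small decision procedure. Here is a speculative sketch in which the thresholds and helper names are our assumptions; only the 333-millisecond, 1-second, 6-frame, 15-frame, and 6-second figures come from the article:

```python
# Speculative sketch of motion metering: pick per-frame exposure and
# frame count from scene motion (optical flow) and device stability
# (gyroscope). Thresholds are illustrative, not Google's tuning.

def blur_limited_exposure_ms(motion_px_per_ms: float, max_blur_px: float = 1.0) -> float:
    # Longest exposure that keeps motion blur under ~1 pixel.
    return max_blur_px / max(motion_px_per_ms, 1e-6)

def plan_capture(scene_motion_px_per_ms: float, gyro_motion: float):
    stable = gyro_motion < 0.01  # e.g. on a tripod or braced against a wall
    cap_ms = 1000.0 if stable else 333.0
    max_frames = 6 if stable else 15
    per_frame_ms = min(cap_ms, blur_limited_exposure_ms(scene_motion_px_per_ms))
    frames = min(max_frames, int(6000.0 / per_frame_ms))  # 6-second total cap
    return per_frame_ms, frames
```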
All that said, alignment and merging — known as exposure stacking in astrophotography — are arguably Night Sight’s secret sauce; it’s how the mode manages to reduce imaging noise. On the Pixel 1 and 2, a modified and retuned version of HDR+’s merging algorithm detects and rejects misaligned pieces of frames, while on the Pixel 3, the Super Res Zoom feature handles the heavy lifting. The latter was designed for super-resolution — it averages multiple images together to improve detail in zoomed-in shots — but it also works to reduce noise.
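To make the reject-then-average idea concrete, here is a deliberately minimal stacking sketch. Real HDR+ and Super Res Zoom merging is tile-based and far more sophisticated; this only illustrates how discarding poorly aligned frames prevents ghosting while averaging cuts noise:

```python
import numpy as np

# Minimal exposure-stacking sketch (not Google's algorithm): compare each
# frame against a reference, drop frames that align poorly, and average
# the survivors. Averaging k frames cuts noise by roughly sqrt(k).

def stack_frames(frames: list[np.ndarray], reject_threshold: float = 20.0) -> np.ndarray:
    reference = frames[0].astype(np.float64)
    kept = [reference]
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        # Stand-in for real tile-based alignment: measure mean residual error.
        if np.abs(frame - reference).mean() < reject_threshold:
            kept.append(frame)  # well aligned: contributes to the merge
        # else: rejected, so a misaligned frame can't ghost the result
    return np.mean(kept, axis=0)
```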
Other challenges
Those weren’t the only challenges Levoy and Knaan had to overcome.
The auto white balance (AWB) algorithm used in non-Night Sight modes has trouble determining the color of the illumination in dim or strongly colored light, so they developed a bespoke, learning-based algorithm for Night Sight that’s trained to discriminate between a well-balanced image and a poorly balanced one. If it detects the latter, it automatically suggests how to shift the colors to make the illumination appear more natural.
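For context on what that learned model improves upon, here is the classic “gray world” heuristic, a much simpler AWB baseline (our own illustration; the article doesn’t say Google uses this):

```python
import numpy as np

# Classic "gray world" white balance, shown only as a simple baseline:
# assume the average scene color is neutral and scale each channel so
# the averages match. Learned AWB models like Night Sight's exist
# precisely because this assumption breaks down in tinted light.

def gray_world_awb(image: np.ndarray) -> np.ndarray:
    # image: float RGB array of shape (H, W, 3), values in [0, 1]
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # push channel averages to gray
    return np.clip(image * gains, 0.0, 1.0)
```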
Above: Foliage captured with Night Sight enabled.
Image Credit: Kyle Wiggers / VentureBeat
Tone mapping — mapping raw pixel values to final brightness, in this case to brighten shadows while preserving overall perceived brightness — proved equally problematic. On a high-end digital camera, taking a long exposure at night produces sharp, colorful photographs with detailed shadows. Simply brightening everything doesn’t always achieve the desired effect, however (a night scene shouldn’t look like it was shot at noon), so Night Sight throws an S-curve into its tone mapping.
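In the spirit of that description, here is a minimal tone-mapping sketch: a gain lifts the shadows, then a gentle S-curve pulls the darkest tones back down so the frame still reads as night. Night Sight’s actual curve isn’t public; the gain and sigmoid here are assumptions:

```python
import numpy as np

# Minimal tone-mapping sketch (assumed curve, not Night Sight's actual one).

def s_curve(x: np.ndarray, strength: float = 6.0) -> np.ndarray:
    # Sigmoid rescaled so that 0 maps to 0 and 1 maps to 1.
    def sig(t):
        return 1.0 / (1.0 + np.exp(-strength * (t - 0.5)))
    return (sig(x) - sig(0.0)) / (sig(1.0) - sig(0.0))

def night_tone_map(x: np.ndarray, shadow_gain: float = 3.0) -> np.ndarray:
    lifted = np.clip(x * shadow_gain, 0.0, 1.0)  # brighten a dark exposure
    return s_curve(lifted)  # restore contrast; keep true blacks black
```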
“If you share the photograph with a friend, they’ll be confused about when you captured it,” Levoy and Knaan wrote. “It’s tricky to strike an effective balance between giving you magical super-powers while still reminding you when the photo was captured.”