How Google's "Frankenphone" taught the Pixel 3's AI to take portraits




Machine learning helps the Google Pixel 3 go beyond purely optical information, letting the smartphone's camera distinguish the foreground from the background for better portrait-mode photos.


A background-blurring portrait mode arrived last year on Google's Pixel 2 smartphone, but researchers used a rig of five phones strapped together to improve the feature's performance in this year's Pixel 3.

Portrait mode simulates the shallow depth of field of high-end cameras and lenses, which focuses your attention on the subject while melting the background into a soft, distraction-free blur. Doing that simulation well is hard, though, and errors can creep in. For the Pixel 3, Google turned to artificial intelligence to fix the problems.
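Conceptually, the effect boils down to blurring each pixel more the farther it sits from the in-focus plane. Here is a minimal, illustrative Python sketch of that idea; the function name and the crude blend-based blur are simplifications for clarity, not Google's actual rendering pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_portrait_blur(image, depth_map, focus_depth, max_sigma=8.0):
    """Blend each pixel toward a blurred copy based on distance from focus.

    image: H x W x 3 float array; depth_map: H x W floats in the same units
    as focus_depth. A crude illustration only: a real renderer would apply
    a proper defocus kernel whose size varies with depth.
    """
    # Weight is 0 near the in-focus plane, 1 far from it (clipped at 1 unit).
    weight = np.clip(np.abs(depth_map - focus_depth), 0.0, 1.0)[..., None]
    # One fully blurred copy, blended in proportionally to the weight.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=max_sigma) for c in range(3)],
        axis=-1,
    )
    return (1.0 - weight) * image + weight * blurred
```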

For that to work, however, Google needed photos to train its AI. Enter the quintet of phones, sandwiched together so that each takes the same picture from a slightly different perspective. Those slight differences in perspective let computers judge how far each part of a scene is from the cameras and generate a "depth map" used to decide which parts of the background to blur.
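To make that concrete, here is a toy Python sketch of the kind of patch matching that turns two slightly offset views into a disparity map, the raw ingredient of a depth map. The function and its parameters are illustrative assumptions, not Google's algorithm.

```python
import numpy as np

def disparity_map(left, right, max_disp=32, patch=7):
    """Brute-force patch matching between two slightly offset grayscale views.

    For each pixel in the left view, find the horizontal shift (0..max_disp)
    whose patch in the right view matches best (lowest sum of absolute
    differences). Larger shifts mean closer objects. A toy sketch, not a
    production algorithm.
    """
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [
                np.abs(ref - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = np.argmin(costs)
    return disp
```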

"We built our own platform" Frankenphone "which contains five Pixel 3 phones, as well as a Wi-Fi solution that allowed us to simultaneously capture images of all phones," said researcher Rahul Garg and the programmer Neal Wadhwa. Google blog article Thursday.

The technique shows how new software and image-processing hardware are changing photography. Smartphones have small image sensors that can't compete with traditional cameras on raw image quality, but Google is pushing the market forward with computational-photography methods that blur backgrounds, increase resolution, fine-tune exposure, and pull detail out of shadows and dark areas.

So where does the Frankenphone come in? It gives the camera a view of the world more like the one we get from our own two eyes.

Google built a "Frankenphone" out of five phones to train its Pixel 3 AI to judge how far away the elements of a scene are.


Humans can judge depth because we have two eyes separated by a short distance, which means each eye sees a slightly different scene. That difference is called parallax. With the iPhone 7 Plus two years ago, Apple took advantage of the parallax between its two rear cameras to take its first steps into portrait mode.
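The geometry behind this is classic stereo triangulation: depth is proportional to the distance between the two viewpoints and inversely proportional to the parallax shift. A quick Python illustration, with made-up numbers:

```python
def depth_from_parallax(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: depth = focal length * baseline / parallax."""
    return focal_px * baseline_m / disparity_px

# All numbers below are invented for illustration. With a 6.4 cm baseline
# (roughly the spacing of human eyes), a 1,500-pixel focal length and a
# 32-pixel parallax shift, the subject sits at 1500 * 0.064 / 32 = 3.0 m.
print(depth_from_parallax(1500, 0.064, 32))  # 3.0
```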

Google's Pixel 2 and Pixel 3 have only a single rear camera, but each pixel in a photo from those phones is actually built from two light detectors: one on the left half of the pixel site, the other on the right half. The left view differs slightly from the right view, and that parallax is enough to judge depth.
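In simplified form, reading out the two halves of every pixel yields two images separated by a tiny baseline. The column-interleaved layout below is purely an assumption for illustration; real dual-pixel raw formats are hardware-specific.

```python
import numpy as np

def split_dual_pixel(raw):
    """Split a dual-pixel frame into left-half and right-half views.

    Assumes a simplified layout in which each pixel site's two photodiode
    samples sit in adjacent columns; real sensor formats are hardware-
    specific. The two views are separated by a baseline of roughly a
    millimeter, enough for a coarse, left-right-only depth estimate.
    """
    left = raw[:, 0::2].astype(np.float32)   # left-photodiode samples
    right = raw[:, 1::2].astype(np.float32)  # right-photodiode samples
    return left, right
```

The resulting pair could then go through a horizontal matcher like the disparity sketch above.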

Not without problems, though, Google said. For example, the sensor can only judge left-right parallax in a scene, not up-down parallax. So Google gives the Pixel 3 a leg up with AI.

The AI folds plenty of other information into the mix, such as slight differences in focus, or the knowledge that a faraway cat looks smaller than a nearby one. The way artificial intelligence works today, though, a model must be trained on real data. In this case, that meant shooting quintets of photos with the Frankenphones, capturing both left-right and up-down parallax information.
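A hedged sketch of what one training record might look like, with hypothetical names: the input is the dual-pixel pair a single phone captures on its own, and the target is the much better-constrained depth map solved from all five synchronized views.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DepthTrainingExample:
    """One hypothetical training record for the depth model.

    dual_pixel is the left/right half-image pair a single phone captures;
    that's the only input the model sees at inference time. target_depth
    is ground truth solved from all five synchronized Frankenphone views,
    whose left-right and up-down parallax constrain depth far better than
    a lone dual-pixel sensor can.
    """
    dual_pixel: np.ndarray    # shape (2, H, W): left and right half views
    target_depth: np.ndarray  # shape (H, W): multi-view depth ground truth
```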

The trained AI, combined with data from another AI system that spots humans in photos, gives the Pixel phones better portrait-mode abilities.

