Google built a "Frankenstein phone" to train its Pixel camera's AI




The Google Pixel stands out as an exception when it comes to the number of cameras. While every other leading manufacturer puts two or even three sensors on the back of its devices, Google relies on machine learning to improve photos through software, including the background-blur (bokeh) effect that normally requires a second camera.

To explain how this is done, Google engineers published a detailed account of their work on the smartphone's camera. In the Pixel 2, the blur effect was achieved by capturing two slightly different views of the subject; the new model in the line further refines a feature that had already caught the attention of anyone who picked up the phone.

The Google Pixel 3 estimates how far each object is from the camera. With that information it builds a depth map and blurs only what is far away, keeping the subject the user cares about in sharp focus. For the software to perform this task accurately, however, its machine-learning model had to be trained on a large number of photos.
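The depth-map compositing described above can be sketched in a few lines. This is a hypothetical simplification, not Google's actual pipeline: `synthetic_blur` is an invented helper that box-blurs the whole image and then keeps pixels sharp only where the depth map is close to the focus plane.

```python
import numpy as np

def synthetic_blur(image, depth, focus_depth, threshold=0.2, kernel=5):
    """Toy depth-based portrait blur: blur pixels whose depth differs from
    the focus plane by more than `threshold`, keep the subject sharp."""
    # Box-blur the whole image (a crude stand-in for a real bokeh kernel).
    pad = kernel // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w = image.shape[:2]
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    # Composite: sharp where depth is near the focus plane, blurred elsewhere.
    mask = (np.abs(depth - focus_depth) <= threshold)[..., None]
    return np.where(mask, image.astype(float), blurred)
```

A real implementation would use a disc-shaped bokeh kernel whose radius grows with depth difference, but the sharp/blurred compositing idea is the same.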

 The "Frankenstein phone" Google built to train its camera AI. This is where the idea of a "Frankenstein phone" was born: a rig made of five different Pixel phones. Using a Wi-Fi-based trigger, the engineers could capture photos from all five cameras simultaneously, which helped the software learn to judge distance and depth in real-world scenes the way our eyes do.
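Why capture the same scene from several phones at once? Multiple simultaneous viewpoints let you compute ground-truth depth by triangulation: a point that shifts a lot between two views (large disparity) is close, and one that barely shifts is far. A minimal sketch of that classic stereo relation, with an invented helper name and illustrative numbers, not Google's actual training code:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole-stereo relation: depth = focal * baseline / disparity.
    focal_px: focal length in pixels; baseline_m: distance between the two
    cameras in meters; disparity_px: pixel shift of the point between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point that shifts 10 px between cameras 10 cm apart (f = 1000 px)
# sits about 10 m away; a 100 px shift would put it at about 1 m.
far = depth_from_disparity(1000.0, 0.1, 10.0)
near = depth_from_disparity(1000.0, 0.1, 100.0)
```

Depth maps derived this way from the five-phone rig can then serve as training targets for a single-camera model.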

The heavy lifting done by artificial intelligence is also a good sign that the photos captured by these smartphones should keep getting better as the model continues to be trained and updated by the company. The camera looks set to remain one of the main differentiators of the Pixel line for years to come.
