Why Google thinks that machine learning is its future




Google CEO Sundar Pichai speaks at the Google I/O developer conference on May 7, 2019.

David Paul Morris / Bloomberg via Getty Images

One of the most interesting demos at this week's Google I/O keynote featured a new version of Google's voice assistant that's due out later this year. A Google employee asked the Google Assistant to bring up her photos, then to show her photos with animals. She tapped one and said, "Send it to Justin." The photo was inserted into the messaging app.

From there, things became more impressive.

"Hey Google, send Jessica an email," she says. "Hi Jessica, I just got back from Yellowstone and fell in love with it." The phone transcribed his words, putting "Hi Jessica" on his own line.

"Submitted to adventures in Yellowstone," she says. The assistant understood that he should put "Yellowstone Adventures" in the subject, not in the body of the message.

Then, without any explicit command, she went back to dictating the body of the message. Finally, she said "send it," and the Google Assistant sent it.
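The technically interesting part of this demo is how the assistant decides, utterance by utterance, whether speech is a command or dictation. Google hasn't published how it does this; the toy sketch below shows one deliberately simplified way such routing could work, with invented trigger phrases and a made-up Draft class:

```python
# Toy sketch of routing each utterance to a command handler or to dictation.
# The trigger phrases and Draft class are invented for illustration; Google
# hasn't published how the Assistant actually makes this decision.

class Draft:
    def __init__(self):
        self.subject = ""
        self.body = []

def handle_utterance(draft, text):
    """Treat recognized phrases as commands; everything else is dictation."""
    lowered = text.lower().strip()
    if lowered.startswith("set subject to "):
        draft.subject = text[len("set subject to "):].strip()
        return "subject set"
    if lowered == "send it":
        return "sent"
    draft.body.append(text)  # not a command, so append to the message body
    return "dictated"

draft = Draft()
handle_utterance(draft, "Hi Jessica, I just got back from Yellowstone and fell in love with it.")
handle_utterance(draft, "Set subject to Yellowstone adventures")
print(draft.subject)  # -> Yellowstone adventures
```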

Google is also working to broaden the assistant's understanding of personal references, the company said. If a user says, "Hey Google, what's the weather like at Mom's house?", the assistant will be able to understand that "Mom's house" refers to the home of the user's mother, look up her address, and provide a weather forecast for her city.
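Mechanically, answering that question means chaining a contact lookup to a weather query. Here is an illustrative sketch of that chain; the contacts table and the get_forecast function are hypothetical stand-ins, not Google APIs:

```python
# Illustrative sketch of resolving a personal reference like "Mom's house."
# The contacts table and get_forecast() are hypothetical stand-ins; this is
# not how the Google Assistant is actually implemented.

contacts = {
    "mom": {"name": "Jane Doe", "city": "Boise, ID"},
}

def get_forecast(city):
    # Stand-in for a real weather API call.
    return f"Sunny and 72°F in {city}"

def weather_at(reference):
    person = contacts.get(reference.lower())
    if person is None:
        return "Sorry, I don't know who that is."
    return get_forecast(person["city"])

print(weather_at("Mom"))  # -> Sunny and 72°F in Boise, ID
```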

Google says its next-generation assistant is coming to "new Pixel phones" (that is, the phones that come after the current Pixel 3 line) later this year.

Obviously, there's a big difference between a canned demo and a shipping product. We'll have to wait and see whether typical interactions with the new assistant work this well. But Google appears to be making steady progress toward its dream of building a virtual assistant that can competently handle even complex tasks.

That was the story of I/O: not the announcement of splashy new products, but the use of machine learning techniques to gradually make Google's existing product line more sophisticated and useful. Google also touted a number of under-the-hood improvements to its machine learning software, which will let software created by Google and others apply more sophisticated machine learning techniques.

In particular, Google is working hard to shift machine learning operations from the cloud onto users' mobile devices. That should let ML-powered applications be faster, more private, and able to work offline.
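A common path for getting a model onto a phone (a sketch of the general technique, not necessarily what Google's own apps do) is to convert a trained TensorFlow model into the compact TensorFlow Lite format with quantization enabled:

```python
# Sketch: shrinking a trained Keras model for on-device inference with
# TensorFlow Lite. The model here is a trivial placeholder; a real app
# would convert its production model the same way.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ship this file inside the mobile app
```

Quantization trades a small amount of accuracy for a much smaller model that runs faster on mobile hardware, which is exactly the trade-off on-device ML requires.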

Google led the charge on machine learning

A printed circuit board containing Google's Tensor Processing Unit.

Google

If you ask machine learning experts when the current deep learning boom began, many will point to a 2012 paper known as "AlexNet" after lead author Alex Krizhevsky. The authors, a trio of researchers from the University of Toronto, entered the ImageNet competition, which challenged entrants to classify images into one of a thousand categories.

The ImageNet organizers supplied more than a million labeled example images to train the networks. AlexNet achieved unprecedented accuracy using a deep neural network with eight trainable layers and 650,000 neurons. The team was able to train such a large network on so much data because it figured out how to harness GPUs, which are designed for massively parallel processing.
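For a sense of what that network looks like in modern terms, here is a minimal sketch that runs torchvision's AlexNet implementation on a dummy image; the original 2012 work used custom GPU code rather than a framework like PyTorch:

```python
# Sketch of an AlexNet-style image classifier using torchvision's
# implementation: stacked convolutions feeding fully connected layers
# that score each of 1,000 ImageNet categories.
import torch
from torchvision import models

net = models.alexnet(num_classes=1000)

# One fake 224x224 RGB image; a real pipeline would feed ImageNet data.
image = torch.randn(1, 3, 224, 224)
scores = net(image)          # raw scores for each of the 1,000 categories
print(scores.argmax(dim=1))  # index of the most likely class
```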

AlexNet demonstrated the importance of what you might call the three-legged stool of deep learning: better algorithms, more training data, and more computing power. Over the last seven years, companies have scrambled to beef up their capabilities on all three fronts, yielding better and better performance.

Google has been leading this charge almost from the beginning. Two years after AlexNet's 2012 win, Google entered the ImageNet competition with an even deeper neural network and took the top prize. The company has hired dozens of top machine learning experts, including through its 2014 acquisition of the deep learning startup DeepMind, keeping it at the forefront of neural network design.

The company also has unmatched access to large data sets. A 2013 article described how Google used deep neural networks to recognize address numbers in tens of millions of images captured by Google Street View.

Google has worked hard on the hardware front, too. In 2016, the company announced that it had built a custom chip called the Tensor Processing Unit, designed specifically to accelerate the operations used by neural networks.

"Although Google was planning to build a dedicated application-specific integrated circuit (ASIC) for neural networks as early as 2006, the situation became urgent in 2013," wrote Google in 2017. "That's when that we have realized that the growing requirements in calculating neural networks could force us to double the number of data centers we operate. "

That's why machine learning has been a major focus of Google I/O for the past three years. The company believes that these assets (a small army of machine learning experts, vast amounts of data, and its own custom silicon) leave it ideally positioned to exploit the opportunities machine learning affords.

Google's I / O this year has not actually announced a lot of new products related to ML because the company has already incorporated machine learning to most of its major products. Android has had voice recognition and Google's assistant for years. Google Photos has long had an impressive search function based on ML. Last year, Google introduced Google Duplex, which makes a reservation on behalf of a user with a strangely realistic human voice created by software.

Instead, the I/O presentations on machine learning focused on two areas: moving more machine learning work onto smartphones, and using machine learning to help people facing various disadvantages, including deafness, illiteracy, and cancer.

Squeezing machine learning onto smartphones

Justin Sullivan / Getty Images

Past efforts to make neural networks more accurate have typically meant making them deeper and more complicated. This approach has produced impressive results, but it comes with a big downside: the networks are often too complex to run on smartphones.

Until now, people have mostly solved this problem by offloading the computation to the cloud. Earlier versions of Google's and Apple's voice assistants recorded audio and uploaded it to company servers for processing. That worked well enough, but it has three significant drawbacks: higher latency, weaker privacy protection, and no offline functionality.

So Google has been working to move more and more computation onto the device itself. Current Android devices already have basic on-device voice recognition, but the Google Assistant requires an Internet connection. Google says that will change later this year with a new offline mode for the Assistant.

This new capability goes a long way toward explaining the extremely fast response times in this week's demo. Google says the assistant will be "up to 10 times faster" for some tasks.
