Google unveils tiny new AI chips for on-device machine learning

Two years ago, Google unveiled its Tensor Processing Units, or TPUs: specialized chips that live in the company's data centers and accelerate its AI workloads. Now, the company is moving that AI expertise out of the cloud, unveiling the new Edge TPU, a tiny AI accelerator that will carry out machine learning tasks in IoT devices.

The Edge TPU is designed to do what is known as "inference": the part of machine learning where an algorithm actually performs the task it was trained for, such as recognizing an object in an image. Google's server-based TPUs are optimized for the training part of this process, while the new Edge TPUs will handle the inference.
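To make the split concrete, here is a minimal inference sketch in Python using TensorFlow, the framework mentioned later in this article. The pre-trained MobileNetV2 classifier and the random stand-in image are illustrative assumptions, not part of Google's announcement:

```python
import numpy as np
import tensorflow as tf

# Load a model whose training already happened elsewhere; inference
# is just applying its fixed weights to new input. (MobileNetV2 is a
# stand-in here, chosen only because it ships with Keras.)
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# A random 224x224 RGB image stands in for a real camera frame.
image = np.random.rand(1, 224, 224, 3).astype(np.float32) * 255
image = tf.keras.applications.mobilenet_v2.preprocess_input(image)

# The forward pass is the "inference" step: recognize what's in the image.
predictions = model.predict(image)
print(tf.keras.applications.mobilenet_v2.decode_predictions(predictions, top=1))
```

An Edge TPU would execute just this forward pass, while the compute-heavy training that produced the weights stays on server-side TPUs.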

These new chips are meant for enterprise jobs, not your next smartphone; think tasks such as automating quality control in a factory. Doing this kind of work on the device itself has a number of advantages over hardware that has to send data over the internet for analysis: on-device machine learning is generally more secure, suffers less downtime, and delivers faster results.


The Edge TPU is the little brother of the standard Tensor Processing Unit, which Google uses to power its own AI and which is available for other clients to use via Google Cloud. Image: Google

Google is not the only company designing chips for this sort of on-device AI task. ARM, Qualcomm, MediaTek, and others all make their own AI accelerators, while Nvidia's GPUs dominate the market for training algorithms.

However, what Google has that its rivals don't is control of the entire AI stack. A customer can store their data on Google's cloud, train their algorithms using TPUs, and then carry out on-device inference using the new Edge TPU. And, more than likely, they will have built their machine learning software with TensorFlow, a coding framework created and maintained by Google.
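As a rough sketch of that cloud-to-edge handoff, the snippet below converts a trained Keras model to TensorFlow Lite and runs it under the lightweight interpreter. The tiny placeholder model and random input are invented for illustration, and a real Edge TPU deployment would additionally need a quantized model compiled for the chip:

```python
import numpy as np
import tensorflow as tf

# Placeholder for a model trained in the cloud (e.g., on Cloud TPUs).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(4,)),
])

# Convert the trained model into the compact TensorFlow Lite format
# that on-device runtimes consume.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# On the device, the converted model runs under the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]["index"],
                       np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```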

This kind of vertical integration has obvious advantages. Google can ensure that each of these parts talks to the others as efficiently and smoothly as possible, making it easier for customers to play (and stay) within the company's ecosystem.

Injong Rhee, vice president of IoT at Google Cloud, described the new hardware in a blog post as an "ASIC chip specifically designed to run TensorFlow Lite ML models." Rhee said: "The Edge TPUs are designed to complement our Cloud TPU offering, so you can speed up ML training in the cloud, then have super-fast ML inference on the edge. Your sensors become more than just data collectors – they make smart, real-time local decisions."

Interestingly, Google is also making the Edge TPU available as a development kit, which will let customers test out the hardware's capabilities and see how it might fit into their products. The dev kit includes a system-on-module (SOM) containing the Edge TPU, an NXP processor, a Microchip secure element, and Wi-Fi functionality; it can connect to a computer or server via USB or a PCI Express expansion slot. The dev kits are only available in beta, and would-be customers will have to request access.

This may seem like a minor piece of news, but it's notable because Google doesn't usually let the public get its hands on its AI hardware. If the company wants customers to adopt its technology, though, it has to let them try it first, rather than asking them to take a leap of faith into the Google AI-sphere. This dev board isn't just a lure for businesses: it's a sign that Google is serious about owning the entire AI stack.
