Amazon shifts some Alexa and Rekognition computing to its own Inferentia chips




(Reuters) – Amazon announced Thursday that it has moved some of the computing behind its Alexa voice assistant to its own custom-designed chips, aiming to make the work faster and cheaper while shifting it away from chips supplied by Nvidia.

When users of devices such as Amazon’s Echo line of smart speakers ask the voice assistant a question, the query is sent to one of Amazon’s data centers for multiple processing steps. When Amazon’s computers spit out a response, that response is in a text format that needs to be translated into audible speech for the voice assistant.

Amazon previously handled this computing with Nvidia chips, but now the “majority” of it will run on its own Inferentia chip. First announced in 2018, the Amazon chip is custom-built to accelerate large volumes of machine learning inference tasks, such as converting text to speech or recognizing images.
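Amazon has not described the internal serving stack behind Alexa or Rekognition, but Inferentia is available publicly through AWS's Neuron SDK on Inf1 instances. The sketch below illustrates only that public workflow, not Amazon's own code: it compiles a stand-in image-recognition model (ResNet-50) for Inferentia with the torch_neuron package, assuming PyTorch and torchvision are installed.

# Illustrative only: the public AWS Neuron workflow for Inferentia (Inf1),
# not Amazon's internal Alexa or Rekognition pipeline. ResNet-50 is a
# placeholder for an image-recognition workload of the kind Inferentia accelerates.
import torch
import torch_neuron  # registers the torch.neuron namespace (pip install torch-neuron)
from torchvision import models

model = models.resnet50(pretrained=True).eval()
example_input = torch.zeros(1, 3, 224, 224)

# Ahead-of-time compilation maps the model onto Inferentia's NeuronCores.
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])
model_neuron.save("resnet50_neuron.pt")

# On an Inf1 instance the compiled artifact loads like any TorchScript module,
# and inference runs on the Inferentia chip rather than a GPU.
loaded = torch.jit.load("resnet50_neuron.pt")
with torch.no_grad():
    output = loaded(example_input)
print(output.shape)  # torch.Size([1, 1000])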

Cloud computing customers such as Amazon, Microsoft and Alphabet’s Google have become some of the biggest buyers of computer chips, driving booming data center sales at Intel, Nvidia and others.

But big tech companies are increasingly abandoning traditional silicon vendors to design their own chips. Apple on Tuesday unveiled its first Mac computers powered by its own central processors, moving away from Intel chips.

Amazon said switching to the Inferentia chip for some of its Alexa work resulted in 25% lower latency, a measure of response time, and a 30% lower cost.

Amazon also said that “Rekognition,” its cloud-based facial recognition service, has begun adopting Inferentia chips. However, the company did not specify which chips the service previously used or how much of that work has been moved to its own chips.

The service has come under scrutiny by civil rights groups due to its use by law enforcement. Amazon in June put a one-year moratorium on its use by police following the murder of George Floyd.

(Reporting by Stephen Nellis in San Francisco. Edited by Tom Brown.)



