Amazon Elastic Inference will reduce deep learning costs by up to 75% – TechCrunch




Amazon Web Services today announced Amazon Elastic Inference, a new service that lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance and reduce the cost of running deep learning inference by up to 75%.
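To make the idea concrete, here is a minimal sketch of what attaching an accelerator at launch time might look like with boto3. This is illustrative only and not from the announcement; the AMI ID and instance type are placeholder assumptions.

```python
# Illustrative sketch (not from the article): launching a CPU-based EC2
# instance with an Elastic Inference accelerator attached via boto3.
# The AMI ID is a placeholder; substitute a real deep learning AMI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c5.large",          # a CPU instance; the accelerator supplies GPU inference
    MinCount=1,
    MaxCount=1,
    # Attach a GPU-powered inference accelerator alongside the instance
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],
)
print(response["Instances"][0]["InstanceId"])
```

The point of the design is visible in the parameters: the compute instance and the accelerator are sized independently, so you pay for a small CPU instance plus just enough GPU throughput for inference, rather than for a full GPU instance that sits mostly idle.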

"What we generally see is that the average utilization of these GPUs of P3 instances is about 10 to 30%, which is rather useless with an elastic inference. You do not have to waste all those costs and all that GPU, "said Andy Jbady, ​​general manager of AWS, on stage at the AWS re: Invent conference earlier in the day. "[Amazon Elastic Inference] is a game changer important enough to be able to execute the inference in a much more profitable way. "

Amazon Elastic Inference will also be available for Amazon SageMaker notebook instances and endpoints, "bringing acceleration to built-in algorithms and to deep learning environments," the company wrote in a blog post. It will support the TensorFlow, Apache MXNet and ONNX machine learning frameworks.
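On the SageMaker side, the same idea could look like the following sketch using the SageMaker Python SDK. Again, this is an assumption-laden illustration rather than code from the announcement: the S3 model path, IAM role ARN, and framework version are placeholders.

```python
# Illustrative sketch (placeholders, not from the article): deploying a
# TensorFlow model to a SageMaker endpoint with an Elastic Inference
# accelerator attached via the accelerator_type parameter.
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/model.tar.gz",               # placeholder model artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",    # placeholder IAM role
    framework_version="1.12",                               # placeholder version
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",      # CPU hosting instance
    accelerator_type="ml.eia1.medium", # attaches the inference accelerator
)
```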

We want our customers to have the right tools for the job. We're excited to announce Amazon Elastic Inference, which lets customers add elastic GPU support for scalable inference on all EC2 instances, with significant cost savings. #reInvent pic.twitter.com/7rbaM5O5QF

– AWS re:Invent (@AWSreInvent) November 28, 2018

It is available in three sizes:

  • eia1.medium: 8 TeraFLOPs of mixed-precision performance.
  • eia1.large: 16 TeraFLOPs of mixed-precision performance.
  • eia1.xlarge: 32 TeraFLOPs of mixed-precision performance.

Dive deeper into the new service here.

More AWS re:Invent 2018 coverage
