Amazon gets ahead of Microsoft, Google and IBM, launching chips to conquer business in the cloud





For some workloads, such as web servers, costs could be reduced by 45% with this technology.

Amazon Web Services (AWS), the cloud division of Amazon, announced on Monday the launch of computing platforms based on server chips that use the ARM architecture, offering customers less costly options.

The announcement took place during the "re:Invent" conference in the American city of Las Vegas, where more than 50,000 people invited by AWS were able to discover all of the company's news, in addition to receiving training in its technologies.

Earlier this year, AWS announced similar platforms based on AMD chips, available alongside its Intel-based offerings.

To get an idea of what Jeff Bezos' cloud computing business represents, consider this figure: AWS revenue grew 46 percent in the third quarter of this year, while Amazon's online stores grew 10 percent.

It is in this spirit that Amazon's cloud business is coming to the market with cheaper options, thanks to server chips using the ARM processing architecture.

AWS, the undisputed leader in the public cloud infrastructure market, is the first of the leading cloud providers to launch ARM-based computing resources.

At the same time, Microsoft, Google, IBM and other companies are competing with Amazon to provide the public cloud infrastructure that companies rely on to store their data and run their applications.

In these clouds – and in on-premises business data centers – customers' computing workloads are often run on chips based on Intel technology.

ARM, owned by SoftBank and whose technology is widely used in chips for smartphones and tablets, has long been considered a potential alternative that could run servers on less energy, which could translate into lower costs.

The AWS EC2 A1 computing platforms are based on the ARM-based Graviton processor from Annapurna Labs, AWS's chip development group.

Peter DeSantis, Vice President of Global Infrastructure at AWS, said during his presentation at the conference that "For some workloads, such as web servers, costs could be reduced by 45 percent with this technology."

It should be noted that Amazon bought Annapurna Labs in 2015. Before that, it had hired several people from Calxeda, a company that was working on ARM-based server systems.

While e-commerce continues to generate the bulk of Amazon's revenue, AWS has become essential to the financial health of the business. In the third quarter of this year, more than half of Amazon's operating profits came from this segment.

AWS now offers its customers more than 125 services, including the EC2 compute service. The A1 platforms are now available in four of AWS's data center regions around the world.

The battle of the chips

The company did not provide many details about the processor itself, but DeSantis said that "it is designed for horizontally scalable workloads that take advantage of a large number of servers working on a problem."

EC2 A1 instances can run applications written for Amazon Linux, Red Hat Enterprise Linux and Ubuntu, and will be available in four regions: Northern Virginia, Ohio and Oregon (USA), and Ireland.
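By way of illustration, launching one of these instances from the Python SDK (boto3) might look like the following sketch; the AMI ID is a placeholder, since a real arm64 Amazon Linux 2 image would have to be looked up for the chosen region.

```python
import boto3

# Northern Virginia, one of the four regions where A1 instances launched.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: an arm64 Amazon Linux 2 AMI
    InstanceType="a1.medium",         # smallest Graviton-based A1 size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```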

This strategy aims to address Intel's dominance of the server processor market, both in the cloud and on local systems.

AMD has tried to challenge this leadership over the years without success, although its new Epyc processors have been well received by server buyers and cloud-based companies.

But many companies have tried, and failed, to create attractive server processors using the ARM architecture, which dominates mobile phones in much the same way Intel dominates the data center.

ARM designs processor cores that other companies use at the heart of their own chip designs. Companies such as Qualcomm and Ampere have attempted to create a business around these designs.

"The cloud leader has custom-integrated silicon into its data centers at a progressive pace for specific workloads such as machine learning," said DeSantis.

An accelerator for the cloud

AWS also announced a network service that allows for "automatic routing" of traffic to multiple regions.

AWS Global Accelerator was introduced by DeSantis as an improvement in availability and performance for end users of AWS customers.

How does it work? User traffic enters the AWS global network through the Global Accelerator edge location closest to the user. From there, the accelerator routes the traffic over the AWS global network to the nearest healthy endpoint of the application.

Finally, the response from the application is returned over the AWS global network and reaches the user through the optimal edge location.

Traffic is directed based on the user's geographic location, the health of the application, and weights that the AWS customer can configure. The new service also assigns static Anycast IP addresses to each accelerator.
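As a rough illustration, here is a hedged sketch with the boto3 Python SDK of how an accelerator, a listener and a weighted endpoint group might be created; the load balancer ARN and the weight value are placeholders, not taken from the announcement.

```python
import uuid
import boto3

# The Global Accelerator control-plane API is served from the us-west-2 region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create the accelerator; the response includes its static anycast IP addresses.
accelerator = ga.create_accelerator(
    Name="demo-accelerator",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]
print(accelerator["IpSets"][0]["IpAddresses"])  # the static anycast IPs

# Accept TCP traffic on port 80.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Route that traffic to an endpoint (e.g. a load balancer) in us-east-1,
# with a configurable per-endpoint weight.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:...", "Weight": 128},  # placeholder ARN
    ],
    IdempotencyToken=str(uuid.uuid4()),
)
```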

With AWS Global Accelerator, customers pay for each accelerator deployed and for the amount of traffic in the dominant direction that passes through the accelerator. The company expects its customers to configure one accelerator per application, but more complex applications may require multiple accelerators.

Users will have to pay a fixed hourly rate ($0.025) for each running accelerator, in addition to standard data transfer charges.

On top of standard data transfer rates, a premium data transfer fee (DT-Premium) is calculated hourly on the dominant traffic direction, that is, either the traffic coming into an application or the traffic going out from an application to end users, whichever is greater.

DT-Premium is charged at a rate per gigabyte of data transferred over the AWS network, and the cost depends on the AWS region serving the application and the edge location through which the traffic flows. AWS Global Accelerator is available in the United States, Europe and Asia-Pacific.
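Putting those figures together, a back-of-the-envelope cost estimate might look like the sketch below; the DT-Premium per-gigabyte rate is a placeholder used purely for illustration, since the actual rate varies by region and edge location.

```python
HOURS_PER_MONTH = 730
ACCELERATOR_HOURLY_RATE = 0.025  # USD per accelerator per hour, as quoted above
DT_PREMIUM_PER_GB = 0.015        # USD per GB, placeholder rate for illustration only


def monthly_cost(accelerators: int, dominant_traffic_gb: float) -> float:
    """Fixed accelerator fee plus DT-Premium on the dominant traffic direction."""
    fixed = accelerators * ACCELERATOR_HOURLY_RATE * HOURS_PER_MONTH
    premium = dominant_traffic_gb * DT_PREMIUM_PER_GB
    return fixed + premium


# One accelerator handling 1 TB of dominant-direction traffic in a month:
print(f"${monthly_cost(1, 1024):.2f}")  # 18.25 + 15.36 = $33.61
```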

The availability of the AWS Transit Gateway, which the company recommends as a way to create a hub-and-spoke network topology, was also announced.

DeSantis explained: "Users can connect existing VPCs (virtual private clouds), data centers, remote offices and remote gateways to a managed Transit Gateway, with full control over network routing and security. This is a way to simplify the network architecture, reduce operational costs and centrally manage external connectivity."

In addition, DeSantis noted that users could also use Transit Gateways to consolidate their existing edge connectivity and route it through a single point of entry and exit.

"Customers can connect up to 5,000 virtual machines to each gateway and each of them can handle up to 50 Gbps of traffic," concluded the executive.
