New AI Chips Seek to Reshape Data Center Design, Cooling

The Cerebras Wafer-Scale Engine (WSE) is optimized for artificial intelligence workloads and is the largest chip ever built. (Image: Cerebras Systems)


The rise of artificial intelligence is transforming the business world. It could shake up the data center along the way.

Powerful new hardware for artificial intelligence (AI) workloads has the potential to reshape the design of data centers and how they are cooled. This week's Hot Chips Conference at Stanford University showcased a number of startups bringing new silicon to market.

The most startling new design came from Cerebras Systems, which came out of stealth mode with a chip that completely rethinks the form factor for data center computing. The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built, at nearly 9 inches in width. At 46,225 square millimeters, the WSE is 56 times larger than the largest graphics processing unit (GPU).
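
For a rough sense of that size gap, here is a quick back-of-the-envelope check in Python. The WSE area is the figure quoted above; the GPU die area (roughly 815 square millimeters, commonly cited for NVIDIA's GV100) is an assumption not taken from this article:

```python
# Rough die-area comparison.
# WSE figure from the article; GPU figure is an assumed ~815 mm^2,
# commonly cited for NVIDIA's GV100 (not taken from this article).
WSE_AREA_MM2 = 46_225
LARGEST_GPU_AREA_MM2 = 815

ratio = WSE_AREA_MM2 / LARGEST_GPU_AREA_MM2
print(f"WSE is roughly {ratio:.1f}x the area of the largest GPU die")
# Prints: WSE is roughly 56.7x the area of the largest GPU die
```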

Is it better? Cerebras says size is "profoundly important" and that its larger chip will process information more quickly, reducing the time it takes AI researchers to train algorithms for new tasks.

The Cerebras design offers a radical new take on the future of AI hardware, and the company's claims about its capabilities will be tested as it brings its first products to market.

Cooling 15 Kilowatts per Chip

If it succeeds, Cerebras will push the existing boundaries of high-density computing. A single WSE contains 400,000 cores and 1.2 trillion transistors and uses 15 kilowatts of power.

I will repeat for clarity – a single WSE uses 15 kilowatts of power. For comparison, a recent survey by AFCOM found users were averaging 7.3 kilowatts of power for an entire rack, which can hold as many as 40 servers. Hyperscale providers average about 10 to 12 kilowatts per rack.
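
To put those numbers side by side, here is a minimal sketch in Python using only the figures quoted above:

```python
# Back-of-the-envelope power comparison using the figures quoted above.
WSE_POWER_KW = 15.0                      # a single Wafer-Scale Engine
AVG_RACK_KW = 7.3                        # AFCOM survey average for a full rack
HYPERSCALE_RACK_KW_RANGE = (10.0, 12.0)  # typical hyperscale rack range

print(f"One WSE vs. average rack:    {WSE_POWER_KW / AVG_RACK_KW:.1f}x")
low, high = HYPERSCALE_RACK_KW_RANGE
print(f"One WSE vs. hyperscale rack: {WSE_POWER_KW / high:.2f}x to {WSE_POWER_KW / low:.1f}x")
# Prints roughly: 2.1x the average rack, and 1.25x to 1.5x a hyperscale rack
```

In other words, a single chip draws roughly twice the power of an average fully loaded rack.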

The heat thrown off by the Cerebras chip will require a different approach to cooling, as well as a new server chassis design. The WSE will be packaged as a server appliance that includes a liquid cooling system.

A look at the manufacturing process for the Cerebras Wafer Scale Engine (WSE), which was fabricated at TSMC. (Image: Cerebras)

Most servers are designed to use air cooling, and thus most data centers are designed around it. A broad shift to high-density AI hardware would mean bringing liquid cooling to the rack, with coolant typically delivered through a system of pipes under a raised floor.

Google has already shifted to liquid cooling for its latest generation of artificial intelligence hardware, and Alibaba and other Chinese hyperscale companies have adopted liquid cooling as well.

"Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited cross-reticle connectivity, yield, power delivery, and packaging, "said Andrew Feldman, founder and CEO of Cerebras Systems. "Every architectural decision has been made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, the performance of existing solutions to a tiny fraction of the power draw and space. "

Data center observers know Feldman as the founder and CEO of SeaMicro, an innovative server startup that packed more than 750 low-power Intel Atom chips into a single server chassis.

Much of the secret sauce for SeaMicro was its internal network fabric, so it is not surprising that Cerebras features an interprocessor fabric called Swarm that combines massive bandwidth with low latency. The company's investors include two networking pioneers, Andy Bechtolsheim and Nick McKeown.

For deeper dives into Cerebras and its technology, see additional coverage in Fortune, TechCrunch, The New York Times and Wired.

New Form Factors Bring More Density, Cooling Challenges

We've been tracking how advances in hardware reshape data center design and density, and AI silicon is the latest chapter in that story.

Cerebras is one of a group of startups building AI chips and hardware. The arrival of startup silicon brings new competition for Intel Corp. and rivals including NVIDIA, AMD and multiple players advancing ARM technology. Intel continues to hold a leading position in the enterprise computing space, but specialized hardware for demanding workloads has been a major trend in the high performance computing (HPC) sector.

This will not be the first time the data center industry has had to reckon with new form factors and higher densities. The introduction of blade servers brought denser hardware along with new power and cooling challenges, and the rise of the Open Compute Project introduced new standards, including a 21-inch rack that was slightly wider than the traditional 19-inch rack.

There is also the question of whether existing facilities will be able to support this new class of hardware without significant upgrades to their power and cooling infrastructure.

For further reading, see our earlier articles summarizing some of the key issues in the evolution of high-density hardware and how the data center industry has adapted.
