
New AI chips try to reshape data center design, cooling



The Cerebras Wafer-Scale Engine (WSE) is optimized for artificial intelligence workloads and is the largest chip ever built. (Image: Cerebras Systems)


The rise of artificial intelligence is transforming business. It may shake up the data center along the way.

Powerful new hardware for artificial intelligence (AI) workloads has the potential to reshape data center design and how it is cooled. This week's Hot Chips conference at Stanford University featured a number of startups presenting custom AI silicon, as well as new offerings from established companies.

The most startling new design came from Cerebras Systems, which came out of stealth mode with a chip that completely rethinks the form factor for data center computing. The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built, at nearly 9 inches wide. At 46,225 square millimeters, the WSE is 56 times larger than the largest graphics processing unit (GPU).
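The "56 times larger" claim can be sanity-checked with simple arithmetic. This is a sketch; the 815 mm² figure for the largest contemporary GPU die (NVIDIA's GV100) is an assumption, not a number stated in the article:

```python
# Rough check of the die-area comparison cited for the WSE.
wse_area_mm2 = 46_225   # Cerebras Wafer-Scale Engine die area
gpu_area_mm2 = 815      # assumed: NVIDIA GV100, the largest GPU die at the time

ratio = wse_area_mm2 / gpu_area_mm2
print(f"WSE is roughly {ratio:.0f}x the area of the largest GPU die")
```

The result lands close to the 56x figure quoted above, so the numbers are at least internally consistent.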

Is bigger better? Cerebras says size is "deeply important" and that its larger chips will process information faster, reducing the time it takes AI researchers to train algorithms for new tasks.

The Cerebras design represents a radical new direction for AI hardware. The first products have not yet reached the market, and analysts are keen to see whether performance testing validates Cerebras' claims about its capabilities.

Cooling 15 kilowatts per chip

If successful, Cerebras will push past the existing boundaries of high-density computing, a trend that is already beginning to create both opportunities and challenges for data center operators. A single WSE contains 400,000 cores and 1.2 trillion transistors and uses 15 kilowatts of power.

That bears repeating for clarity: a single WSE uses 15 kilowatts of power. By comparison, a recent AFCOM study found that users averaged 7.3 kilowatts of power for an entire rack, which can hold as many as 40 servers. Hyperscale operators average 10 to 12 kilowatts per rack.
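A quick back-of-the-envelope comparison puts those figures in perspective (a sketch; the 11 kW hyperscale value is simply the midpoint of the 10 to 12 kW range quoted above):

```python
# Compare a single WSE's power draw against typical whole-rack averages.
wse_chip_kw = 15.0         # one Cerebras WSE
avg_rack_kw = 7.3          # AFCOM survey average for an entire rack
hyperscale_rack_kw = 11.0  # assumed midpoint of the 10-12 kW hyperscale range

print(f"One WSE vs. average rack:    {wse_chip_kw / avg_rack_kw:.1f}x")
print(f"One WSE vs. hyperscale rack: {wse_chip_kw / hyperscale_rack_kw:.1f}x")
```

In other words, a single chip draws roughly twice the power of an entire average rack, which is why the cooling approach has to change.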

The heat thrown off by the Cerebras chip will require a different approach to cooling, as well as to the server chassis. The WSE will be packaged as a server appliance containing a liquid cooling system, which reportedly uses a cold plate fed by a series of tubes, with the chip placed vertically in the chassis to better cool its entire surface.

A look at the production process for the Cerebras Wafer-Scale Engine (WSE), which was manufactured at TSMC. (Image: Cerebras)

Most servers are designed for air cooling, and thus most data centers are designed for air cooling. A broad shift to liquid cooling would require data center operators to bring water to the rack, often delivered through piping under a raised floor.

Google's decision to switch to liquid cooling for its latest artificial intelligence hardware has raised expectations that others may follow. Alibaba and other Chinese hyperscale companies have also adopted liquid cooling.


"Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size – such as cross-reticle connectivity, yield, power delivery and packaging," said Andrew Feldman, founder and CEO of Cerebras Systems. "Every architectural decision was made to optimize performance for AI work. As a result, depending on the workload, the Cerebras WSE delivers hundreds or thousands of times the performance of existing solutions at a small fraction of the power draw and space."

Data center observers know Feldman as the founder and CEO of SeaMicro, an innovative server startup that packed more than 750 low-power Intel Atom chips into a single server chassis.

Much of SeaMicro's secret sauce was in its network fabric. It is therefore not surprising that Cerebras has built an interprocessor fabric, called Swarm, that combines massive bandwidth with low latency. The company's investors include two networking pioneers, Andy Bechtolsheim and Nick McKeown.

Cerebras' launch drew coverage in Fortune, TechCrunch, The New York Times and Wired.

New Form Factors Bring More Density, Cooling Challenges

We have been following advances in rack density and liquid cooling adoption for many years at Data Center Frontier as part of our focus on new technologies and how they can transform the data center. New hardware for AI workloads packs more computing power into each piece of equipment, increasing power density – the amount of power used by servers and storage in a rack or cabinet – and the accompanying heat.

Cerebras is one of a group of startups building AI chips and hardware. The arrival of startup silicon in the AI computing market follows several years of intense competition between chip market leader Intel Corp. and rivals including NVIDIA, AMD and several players promoting ARM technology. Intel continues to hold a dominant position in the enterprise, but the development of powerful new hardware optimized for specific workloads has been a major trend in the high-performance computing (HPC) sector.

This will not be the first time the data center market has had to adapt to new form factors and higher densities. The introduction of blade servers packed dozens of server boards into each chassis, bringing higher heat loads that many data center managers struggled to manage. The rise of the Open Compute Project also introduced new standards, including a 21-inch rack slightly wider than the traditional 19-inch rack.

There is also the question of whether the rise of powerful AI hardware will compress more computing power into a smaller space, requiring redesigns or retrofits for liquid cooling, or whether high-density deployments will be spread across existing facilities to distribute their impact on power and cooling infrastructure.

For further reading, here are articles that summarize some of the key issues in the development of high-density hardware and how the data center industry has adapted:

