Nvidia pushes ARM supercomputing | Ars Technica




Nvidia pushes ARM supercomputing

Graphics chip maker Nvidia is best known for consumer computing, where it rivals AMD's Radeon line on frame rates and eye candy. But the venerable giant has not ignored the rise of GPU-powered applications that have virtually nothing to do with games. In the early 2000s, UNC researcher Mark Harris began popularizing the term "GPGPU," referring to the use of graphics processing units for non-graphics tasks. But most of us didn't really grasp the non-graphical possibilities until GPU-based cryptocurrency mining code was published in 2010; soon after, strange boxes stuffed with high-end gaming cards began appearing everywhere.
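The "GPGPU" idea Harris popularized boils down to expressing a non-graphics computation as one small kernel applied independently to every element of a dataset, so that thousands of copies can run at once. A conceptual sketch in plain Python (this is an illustration of the programming model, not actual GPU code; real GPGPU work would use something like CUDA or OpenCL):

```python
# Conceptual sketch of the GPGPU model: a "kernel" runs once per index,
# touching only its own element, so a GPU can execute thousands of these
# instances in parallel. Here we emulate the launch sequentially.

def saxpy_kernel(i, a, x, y, out):
    """One 'thread' of work: compute a single element of a*x + y."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a GPU kernel launch: invoke the kernel for each index."""
    for i in range(n):  # on real hardware these iterations run concurrently
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

Because each kernel instance writes only its own output slot, no coordination between "threads" is needed; that independence is what lets graphics hardware chew through mining hashes and climate data alike.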

From digital currency to supercomputing

Each year, the Association for Computing Machinery awards one or more $10,000 Gordon Bell Prizes to research teams that have done outstanding work in performance, scale, or time-to-solution on complex scientific and engineering problems. Five of the six 2018 finalists, including the two winning teams from Oak Ridge National Lab and Lawrence Berkeley National Lab, used Nvidia GPUs in their arrays; Lawrence Berkeley's team included six people from Nvidia itself.

Enlarge / The impressive part of the segmentation masks superimposed on this map projection has nothing to do with antialiasing: more than 300 petaflops of computation were needed to analyze the atmospheric data of an entire planet in order to produce them.

In March of this year, Nvidia acquired Mellanox, maker of the high-performance InfiniBand network interconnect. (InfiniBand is frequently used as an alternative to Ethernet for very high-speed connections between storage and compute stacks in the enterprise, with real-world throughput of up to 100Gbps.) It's the same technology the LBNL/Nvidia team used in 2018 to win the Gordon Bell Prize, with a deep-learning project for climate analytics.

The acquisition made it clear (though Nvidia had already made it clear to anyone paying attention) that the company is serious about the supercomputing space, not merely chasing it for optics while its real focus stays on the consumer market.

Towards a more open future

This solid history of research and acquisitions underscores the importance of Nvidia's announcement Monday morning at the International Conference on High Performance Computing in Frankfurt: the company is making its full stack of supercomputing software available to ARM-powered high-performance computers, and it expects to complete the port by the end of 2019. In an interview with Reuters, Nvidia's vice president of accelerated computing, Ian Buck, described the move as coming partly at the request of HPC researchers in Europe and Japan.

Most people know ARM from the energy-efficient, relatively low-performance systems-on-chip (compared with traditional Intel and AMD x86-64 parts) used in smartphones, tablets, and hobbyist devices such as the Raspberry Pi. At first glance, that makes ARM a strange choice for compute-intensive work. But high-performance computing is about much more than a single processor. At datacenter scale, computing generally relies as much on massive parallelism as on per-thread performance, if not more. By prioritizing energy efficiency, ARM SoCs typically demand far less power and cooling, which allows denser clustering in a data center. That means potentially lower cost, smaller footprint, and greater reliability for the same amount of compute.
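The power-budget arithmetic behind that tradeoff can be sketched with a few lines of Python. All wattage and throughput figures below are made-up illustrative assumptions, not measurements of any real chip:

```python
# Back-of-the-envelope: within a fixed rack power budget, many efficient
# SoCs can beat fewer power-hungry CPUs on *aggregate* throughput, even
# though each individual chip is much slower. Figures are hypothetical.

RACK_POWER_BUDGET_W = 10_000  # total power available to one rack

def aggregate_gflops(tdp_w: float, gflops_per_chip: float, budget_w: float) -> float:
    """Number of chips that fit in the power budget times per-chip throughput."""
    chips = int(budget_w // tdp_w)
    return chips * gflops_per_chip

# A fast, power-hungry x86 part vs. a slower but far more efficient ARM SoC.
x86_total = aggregate_gflops(tdp_w=200, gflops_per_chip=1000, budget_w=RACK_POWER_BUDGET_W)
arm_total = aggregate_gflops(tdp_w=20, gflops_per_chip=150, budget_w=RACK_POWER_BUDGET_W)

print(f"x86 rack: {x86_total:.0f} GFLOPS")  # 50 chips x 1000 = 50000
print(f"ARM rack: {arm_total:.0f} GFLOPS")  # 500 chips x 150 = 75000
```

Under these invented numbers, the ARM rack delivers 50% more aggregate throughput from the same power envelope; the catch, of course, is that this only pays off for workloads that parallelize well across many slower cores.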

But the licensing model is potentially even more important: where the Intel, IBM, and AMD architectures are closed and proprietary, ARM's is largely open. Unlike the x86-64 processor makers, ARM does not manufacture chips; it licenses its technology to a wide range of manufacturers, who then build actual SoCs with it.

Enlarge / Pinebook moves on from its original $99 laptop with this upcoming $199 daily driver, the magnesium-alloy Pinebook Pro. It would be impossible for such a small company to compete with Chromebooks on price using x86 hardware.

This open-architecture approach to hardware design appeals to a wide range of technologists: developers eager to accelerate design cycles; security wonks worried about the hardware equivalent of a Ken Thompson hack buried in a closed design-and-fabrication process; and innovators trying to lower the cost barrier to entry-level computing.

Hopefully, Nvidia's decision to support ARM in HPC will also trickle down into more prosaic device support, meaning less expensive, more powerful, and more user-friendly devices in the consumer space.
