Nvidia pushes ARM supercomputing | Ars Technica

Graphics chip manufacturer Nvidia is best known for consumer computing, vying with AMD's Radeon line for frame rates and eye candy. But the venerable giant has not ignored the rise of GPU-powered applications that have little or nothing to do with gaming. In the early 2000s, UNC researcher Mark Harris began popularizing the term "GPGPU," which refers to the use of graphics processing units for non-graphics tasks. But most of us weren't really aware of the GPU's non-graphics capabilities until GPU-powered Bitcoin mining code was released in 2010; shortly thereafter, strange boxes packed nearly solid with high-end gaming cards began turning up everywhere.
From digital currencies to supercomputing
The Association for Computing Machinery awards one or more $10,000 Gordon Bell Prizes each year to a research team that has made a breakthrough in performance, scale, or time-to-solution on a challenging science or engineering problem. Five of the six 2018 entrants, including both winning teams (from Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory), used Nvidia GPUs in their supercomputing arrays; the Lawrence Berkeley team included six people from Nvidia itself.
In March this year, Nvidia acquired Mellanox, makers of the high-end networking technology InfiniBand. (InfiniBand is frequently used as an alternative to Ethernet for extremely high-speed connections between enterprise storage and compute stacks, with real-world throughput of up to 100Gbps.) This is the same technology the LBNL/Nvidia team used in its 2018 Gordon Bell Prize win (for a deep-learning project on climate analytics).
The acquisition sent a clear signal (as did Nvidia's own statements, for anyone who had missed it) that the company is serious about the supercomputing space, not just dabbling in it.
Moving towards a more open future
This strong history of research and acquisition underscores the importance of the move Nvidia announced Monday morning at the International Supercomputing Conference in Frankfurt: the company is making its full stack of supercomputing hardware and software available for ARM-powered high-performance computers, and it expects to complete the project by the end of 2019. In an interview with Reuters, Nvidia VP of accelerated computing Ian Buck described the move as a technical "heavy lift" requested by HPC researchers in Europe and Japan.
Most people know ARM best for the power-efficient, relatively low-performance (compared to traditional x86-64 designs from Intel and AMD) systems-on-chip used in smartphones, tablets, and hobbyist hardware such as the Raspberry Pi. At first blush, this makes ARM seem a strange choice for supercomputing. But there is much more to HPC than individual beefy CPUs. On the technical side, data-center computation generally relies as much or more on massive parallelism as on per-thread performance. The typical ARM SoC's focus on energy efficiency means that far less power draw and cooling is needed, so more of them can be packed into a data center. That translates to potentially lower cost, smaller footprint, and higher reliability for the same amount of compute.
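The trade-off between many efficient cores and fewer fast ones can be sketched with a toy throughput model under a fixed power budget. All numbers here are hypothetical and purely illustrative, not benchmarks of any real chip:

```python
def rack_throughput(num_chips, cores_per_chip, per_core_ops, watts_per_chip, power_budget):
    """Total ops/sec for as many chips as a fixed power budget (watts) allows.

    Toy model: aggregate throughput = chips * cores * per-core speed,
    assuming a perfectly parallel workload. Numbers are made up.
    """
    chips = min(num_chips, power_budget // watts_per_chip)
    return chips * cores_per_chip * per_core_ops

# Hypothetical x86-64-style node: few fast cores, high power draw per chip.
x86 = rack_throughput(num_chips=100, cores_per_chip=32, per_core_ops=4e9,
                      watts_per_chip=250, power_budget=10_000)

# Hypothetical ARM-style node: slower cores, but many more chips fit
# inside the same power budget.
arm = rack_throughput(num_chips=1000, cores_per_chip=64, per_core_ops=1.5e9,
                      watts_per_chip=30, power_budget=10_000)

print(f"x86 rack: {x86:.2e} ops/s")
print(f"ARM rack: {arm:.2e} ops/s")
```

For workloads that parallelize well, the power-constrained ARM configuration comes out ahead in this sketch, which is the argument HPC designers make; for serial, per-thread-bound work the picture reverses.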
But licensing may be even more important, and here ARM differs sharply from Intel and AMD. Unlike the x86-64 CPU manufacturers, ARM does not produce chips itself; it licenses its technology to a wide variety of manufacturers, who then build actual SoCs with it.
This model appeals to a wide range of technologists, including developers who want to accelerate design cycles, security wonks concerned about the possibility of a Ken Thompson-style hack buried in a closed design and manufacturing process, and innovators seeking to lower the cost barrier to entry for chip design.
With luck, Nvidia's move to support ARM in HPC will trickle down to support for more prosaic devices, meaning cheaper, more powerful, and friendlier hardware in the consumer space.