Nvidia aims to unify AI, HPC computing in HGX-2 server platform

Nvidia is refining its pitch for data-center performance and efficiency with a new server platform, the HGX-2, designed to harness the power of 16 Tesla V100 Tensor Core GPUs to satisfy requirements for both AI and high-performance computing (HPC) workloads.

Data-center server makers Lenovo, Supermicro, Wiwynn and QCT said they would ship HGX-2 systems by the end of the year. Some of the biggest customers for HGX-2 systems are likely to be hyperscale providers, so it’s no surprise that Foxconn, Inventec, Quanta and Wistron are also expected to manufacture servers that use the new platform for cloud data centers.  

The HGX-2 is built from two GPU baseboards that link the Tesla GPUs via an NVSwitch interconnect fabric. Each baseboard carries eight GPUs, for a total of 16. The HGX-1, announced a year ago, supported only eight GPUs.
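The two-baseboard layout can be sketched with a simple model. This is illustrative only: the board and GPU counts come from the article, while the all-to-all connectivity is an assumption based on NVSwitch acting as a switched fabric between all 16 GPUs.

```python
# Illustrative sketch (not Nvidia code): modeling the HGX-2 topology
# described above -- two baseboards of eight Tesla V100 GPUs each,
# assuming NVSwitch gives every GPU a path to every other GPU.
from itertools import combinations

BASEBOARDS = 2          # per the article
GPUS_PER_BASEBOARD = 8  # per the article

gpus = [(board, slot)
        for board in range(BASEBOARDS)
        for slot in range(GPUS_PER_BASEBOARD)]

# Under the all-to-all assumption, the fabric behaves like a
# fully connected graph over the 16 GPUs.
links = list(combinations(gpus, 2))

print(len(gpus))   # 16 GPUs total
print(len(links))  # 120 pairwise GPU-to-GPU paths
```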

Nvidia describes the HGX-2 as a “building block” around which server makers can build systems tuned to different tasks. It’s the same systems platform on which Nvidia’s own upcoming DGX-2 is based. The news here is that the company is making the platform available to server makers along with a reference architecture so that systems can ship by the end of the year.
