NVIDIA Reveals Roadmap at Computex

The annual Computex trade show in Taipei has traditionally been PC-centric, with ODMs showing their latest motherboards and systems. The 2024 event, however, included keynotes from Nvidia and others that revealed details of forthcoming datacenter GPUs, demonstrating the importance of the ODM ecosystem to the explosion of AI. The fact that Jensen Huang was born on the island made his keynote all the more impactful for the local audience. In the week following the CEO's keynote, Nvidia's market capitalization surpassed $3 trillion. From a networking perspective, the keynote emphasized Ethernet rather than InfiniBand, as Ethernet is the better fit for the ODM ecosystem that Computex represents.

[Image source: NVIDIA]

The datacenter section of Jensen's talk largely reminded the audience of what Nvidia announced at GTC in March. The Blackwell GPU, now in production, introduces NVLink5, which operates at 200Gbps per lane. It includes 18 NVLink ports with two lanes each, or 36x200Gbps SerDes. The new NVLink5 switch ASIC handles 72 ports for an aggregate 28.8Tbps of bandwidth. The star product of GTC was the GB200 NVL72 rack, which connects 72 Blackwell GPUs into a single logical GPU with 13.8TB of high-bandwidth memory (HBM). Nvidia emphasized that the NVLink5 backplane uses 5,184 passive copper cables, as this is the lowest-power design.
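
As a quick sanity check on these figures (our own back-of-the-envelope arithmetic, not an Nvidia reference), the short Python sketch below reproduces the per-port, per-GPU, and per-switch bandwidth numbers quoted above.

```python
# Back-of-the-envelope check of the NVLink5 figures quoted above.
LANE_RATE_GBPS = 200   # NVLink5 SerDes rate per lane
LANES_PER_PORT = 2     # two lanes per NVLink port
PORTS_PER_GPU = 18     # NVLink ports on a Blackwell GPU
SWITCH_PORTS = 72      # ports on the NVLink5 switch ASIC

port_gbps = LANE_RATE_GBPS * LANES_PER_PORT        # 400 Gbps per port
gpu_gbps = port_gbps * PORTS_PER_GPU               # 7,200 Gbps per GPU
switch_tbps = SWITCH_PORTS * port_gbps / 1_000     # 28.8 Tbps per switch ASIC

print(f"Per-port bandwidth:    {port_gbps} Gbps")
# 0.9 TB/s per direction, consistent with Nvidia's 1.8 TB/s bidirectional figure
print(f"Per-GPU bandwidth:     {gpu_gbps} Gbps (~{gpu_gbps / 8 / 1000:.1f} TB/s per direction)")
print(f"Switch ASIC aggregate: {switch_tbps} Tbps")
```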

At Computex, Jensen displayed the NVL72 "spine," which differs from the renderings shown at GTC. The actual implementation houses the thousands of copper cables in a vertical chassis, which mates to the rack's 18 compute trays and 9 NVLink switch trays. Jensen referred to the NVLink spine as an "electro-mechanical miracle." He repeated the claim that the passive design saves 20kW per rack, but that comparison is with a theoretical optical implementation. As a reminder, NVLink does not replace InfiniBand or Ethernet, which still serve as the back-end (or scale-out) network. Although Nvidia says that NVLink5 can scale to 576 GPUs, the NVL72 rack appears optimal for training the latest large-language models.
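
The port counts behind the spine also hang together. The sketch below (our own consistency check) assumes two NVLink5 switch ASICs per switch tray, per Nvidia's GTC materials; the implied cables-per-link ratio is inferred from the 5,184 figure, not an Nvidia-stated number.

```python
# Consistency check between GPU-side ports, switch-side ports, and the cable count.
GPUS = 72
PORTS_PER_GPU = 18
SWITCH_TRAYS = 9
ASICS_PER_TRAY = 2     # assumption: two NVLink5 switch chips per switch tray
PORTS_PER_ASIC = 72
CABLES = 5_184         # passive copper cables quoted for the NVLink spine

gpu_ports = GPUS * PORTS_PER_GPU                                # 1,296 GPU-side ports
switch_ports = SWITCH_TRAYS * ASICS_PER_TRAY * PORTS_PER_ASIC   # 1,296 switch-side ports
assert gpu_ports == switch_ports  # every GPU port lands on a switch port

print(f"GPU-side NVLink ports:     {gpu_ports}")
print(f"Switch-side NVLink ports:  {switch_ports}")
print(f"Implied cables per link:   {CABLES / gpu_ports:.0f}")
```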

LightCounting subscribers can access additional details, figures, and analysis in a full-length research note: www.lightcounting.com/login
