Posts

White Paper: An Open Ethernet Switch ISA

In the AI era, silicon-design cycles have become a limiting factor in feature velocity. Network programmability has never been more critical, yet flexibility cannot come at the cost of performance and power efficiency. With its X-Switch architecture, Xsight Labs targeted the sweet spot for data-center switching, balancing performance and power while maximizing programmability. Now, to encourage innovation and adoption, the company has opened the instruction sets for its X-Switch programmable Ethernet switch architecture. Xsight Labs sponsored the creation of this white paper, but the opinions and analysis are those of the author. Download the white paper for free, no registration required.

X-Switch Scalable Architecture

NVIDIA Pivots as Networking Stalls

Yes, $11B in Blackwell revenue is impressive. Yes, Nvidia's data-center revenue grew 93% year over year. Under the surface, however, there's trouble in networking. In the January quarter (Q4 FY25), networking revenue declined 9% year over year and 3% sequentially. In its earnings call, CFO Colette Kress said that Nvidia's networking attach rate was "robust" at more than 75%. Her very next sentence, however, hinted at what's happening underneath that supposed robustness. "We are transitioning from small NVLink8 with InfiniBand to large NVLink72 with Spectrum-X," said Kress. About one year ago, Nvidia positioned InfiniBand for "AI factories" and Spectrum-X for multi-tenant clouds. That positioning collapsed when the company revealed xAI selected Spectrum-X for what is clearly an AI factory. InfiniBand appears to be retreating to its legacy HPC market while Ethernet comes to the fore.

Nvidia Data-Center Revenue

So how do we square 93% DC grow...

White Paper: Xsight Softens the DPU

Powering SmartNICs, the data-processing unit (DPU) has become nearly ubiquitous in the leading public clouds. Existing designs maximize power efficiency for a constrained feature set, and they require proprietary software tools. Xsight Labs aims to break this paradigm with its new E1 DPU, which promises the openness of an Arm server CPU. Xsight Labs sponsored the creation of this white paper, but the opinions and analysis are those of the author. Download the white paper for free, no registration required.

Xsight E1 DPU

White Paper: Xsight Recharges the Cloud ToR

Cloud-datacenter operators are driving rapid adoption of 800Gbps optical modules while also upgrading compute-server NICs to 400Gbps speeds. The 51.2Tbps switch chips designed for these network fabrics, however, deliver too much capacity for top-of-rack switch systems. With its X2, Xsight Labs developed a unique chip aimed at optimizing compute racks by enabling 100Gbps-per-lane server links and 800Gbps uplink optics. Xsight Labs sponsored the creation of this white paper, but the opinions and analysis are those of the author. Download the white paper for free, no registration required.

Xsight X2
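To see why a 51.2Tbps fabric chip is oversized for a top-of-rack role, a rough sizing sketch helps. The rack parameters below (server count, oversubscription) are illustrative assumptions, not Xsight's figures:

```python
# Rough ToR bandwidth-budget sketch; rack parameters are assumptions,
# not figures from the white paper.
SWITCH_CAPACITY_GBPS = 51_200   # 51.2Tbps fabric-class switch chip
SERVERS_PER_RACK = 24           # assumed rack density
NIC_GBPS = 400                  # 400Gbps server NICs (4x100G lanes)
OVERSUBSCRIPTION = 1            # assume non-blocking uplinks

downlink = SERVERS_PER_RACK * NIC_GBPS   # bandwidth toward servers
uplink = downlink // OVERSUBSCRIPTION    # bandwidth in 800G uplink optics
needed = downlink + uplink               # total switch capacity consumed

print(f"ToR consumes {needed / 1000:.1f}T of a "
      f"{SWITCH_CAPACITY_GBPS / 1000:.1f}T chip")
```

Even with generous assumptions, such a rack consumes well under half of a 51.2Tbps chip, which is the gap a right-sized ToR device like the X2 targets.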

AI Unsurprisingly Dominates Hot Chips 2024

This year's edition of the annual Hot Chips conference represented the peak in the generative-AI hype cycle. Consistent with the theme, OpenAI's Trevor Cai made the bull case for AI compute in his keynote. At a conference known for technical disclosures, however, the presentations from merchant chip vendors were disappointing; despite a great lineup of talks, few new details emerged. Nvidia's Blackwell presentation mostly rehashed previously disclosed information. In a picture-is-worth-a-thousand-words moment, however, one slide included the photo of the GB200 NVL36 rack shown below.

GB200 NVL36 rack (Source: Nvidia)

Many customers prefer the NVL36 over the power-hungry NVL72 configuration, which requires a massive 120kW per rack. The key difference for our readers is that the NVLink switch trays shown in the middle of the rack have front-panel cages, whereas the "non-scalable" NVLink switch tray used in the NVL72 has only back-panel connectors for the NVLink spin...

NVIDIA Reveals Roadmap at Computex

The annual Computex trade show in Taipei has traditionally been PC-centric, with ODMs showing their latest motherboards and systems. The 2024 event, however, included keynotes from Nvidia and others that revealed details of forthcoming datacenter GPUs, demonstrating the importance of the ODM ecosystem to the explosion of AI. The fact that Jensen Huang was born on the island made his keynote all the more impactful for the local audience. In the week following the CEO's keynote, Nvidia's market capitalization surpassed $3 trillion. From a networking perspective, the keynote focused on Ethernet rather than InfiniBand, as the former is a better fit for the ecosystem messaging.

Source: NVIDIA

The datacenter section of Jensen's talk largely reminded the audience of what Nvidia announced at GTC in March. The Blackwell GPU, now in production, introduces NVLink5, which operates at 200Gbps per lane. It includes 18 NVLink ports with two lanes each, or 36x200Gbps serdes. The new NVLink...
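As a quick sanity check on those NVLink5 numbers, a short sketch totals the per-GPU bandwidth implied by 18 two-lane ports at 200Gbps per lane:

```python
# Illustrative NVLink5 per-GPU bandwidth arithmetic from the figures above.
PORTS = 18           # NVLink ports per Blackwell GPU
LANES_PER_PORT = 2   # lanes per port
GBPS_PER_LANE = 200  # NVLink5 signaling rate per lane

total_lanes = PORTS * LANES_PER_PORT      # serdes count
total_gbps = total_lanes * GBPS_PER_LANE  # aggregate per direction
total_gbytes = total_gbps / 8             # bits -> bytes

print(f"{total_lanes} lanes -> {total_gbps} Gbps "
      f"({total_gbytes:.0f} GB/s) per direction")
```

That 7.2Tbps (900GB/s) per direction is consistent with the 1.8TB/s bidirectional figure Nvidia quotes for NVLink5.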

PAM4 DSPs Battle LPO for OFC Mindshare

Last year, module vendors demonstrated the first 1.6T optical modules, and this year DSP vendors looked ahead to second-generation 1.6T module designs. Whereas the first 1.6T modules connect a 16x100G host interface to 8x200G optics (16:8), next-generation designs will work with forthcoming 200G/lane switch ASICs, as shown in the top row of the figure. Broadcom disclosed its Sian2 1.6T 8:8 DSP at a March investor event, and Marvell followed by announcing its similar Nova 2 at OFC. Not wanting to be left out of the 1.6T landscape, MaxLinear pre-announced Rushmore, which similarly targets 8:8 designs. Although the company withheld product details, it disclosed Samsung Foundry as its manufacturing partner for Rushmore, setting it apart from competitors using TSMC.

Source: Broadcom

Progress on linear pluggable optics (LPO) and other less-than-full-DSP variants was evident at 100G/lane, but vendors also set the stage for 200G/lane. Last November, Credo Semiconductor was first to announc...
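The gearing shorthand above (16:8 versus 8:8) can be checked with a few lines of arithmetic; both configurations carry the same 1.6T aggregate, and the second generation simply halves the host-side lane count:

```python
# Illustrative check of 1.6T module gearing: first-gen 16:8 vs second-gen 8:8.
def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Total throughput of one side of the DSP."""
    return lanes * gbps_per_lane

# First generation: 16x100G host interface geared to 8x200G optics.
host_gen1 = aggregate_gbps(16, 100)
line_gen1 = aggregate_gbps(8, 200)

# Second generation: 8x200G host (200G/lane switch ASIC) to 8x200G optics.
host_gen2 = aggregate_gbps(8, 200)
line_gen2 = aggregate_gbps(8, 200)

assert host_gen1 == line_gen1 == host_gen2 == line_gen2 == 1600
print("both gearings carry 1.6Tbps; 8:8 halves the host lane count")
```

Because the 8:8 DSP no longer converts lane rates between host and line sides, it is the design that pairs naturally with the forthcoming 200G/lane switch ASICs.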