CXL Chip Market Poised for Rapid Growth

It's not often that we see a new interconnect come along that's a sure thing. By piggybacking on the PCI Express physical layer, however, CXL has become one of those rare birds. As is always the case with new technologies, it will take time for a multi-vendor ecosystem to mature. CXL offers many incremental steps along the architectural-evolution path, allowing the technology to ramp quickly while offering future iterations that enable truly composable systems.

It All Starts with Server CPUs

Although not officially launched, Intel's Sapphire Rapids is already shipping to early customers. Development platforms are also in partners' hands, enabling validation and testing of CXL components. AMD's Genoa is also about to launch with CXL support. The caveat for both vendors is that these first CPUs support only CXL 1.1, which lacks important features incorporated in the CXL 2.0 specification. Both versions ride atop PCIe Gen5, however, so the physical layer needn't change.

The figure below shows our forecast for CXL-enabled servers by technology generation. We expect Granite Rapids will ship in 2024, and Intel disclosed that the processor will offer CXL 2.0. Likewise, we expect AMD's Turin will ship in 2024 and also support the 2.0 specification. As a result, we see CXL 2.0 host shipments quickly overtaking CXL 1.1 platforms. The timing of servers with CXL 3.0 is more speculative, but those platforms should appear in 2026. Note that these systems also mark the transition to PCIe Gen6, which uses PAM4 modulation plus lightweight forward error correction (FEC) to double the per-lane rate. By 2026, we expect virtually all new servers will handle CXL.
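To put rough numbers on that doubling, here's a back-of-the-envelope calculation; it computes raw signaling rates only, ignoring encoding, flit, and FEC overhead, so delivered bandwidth is somewhat lower.

```python
# Back-of-the-envelope PCIe link rates (raw signaling only --
# encoding, flit, and FEC overhead reduce delivered throughput).

def raw_GBps(gbaud, bits_per_symbol, lanes):
    """Raw one-direction link bandwidth in GB/s."""
    return gbaud * bits_per_symbol * lanes / 8

# PCIe Gen5: 32 Gbaud per lane with NRZ (1 bit per symbol) = 32 GT/s.
gen5_x16 = raw_GBps(32, 1, 16)   # -> 64 GB/s per direction

# PCIe Gen6: the same 32 Gbaud symbol rate, but PAM4 carries 2 bits
# per symbol, doubling the per-lane rate to 64 GT/s.
gen6_x16 = raw_GBps(32, 2, 16)   # -> 128 GB/s per direction

print(f"Gen5 x16: {gen5_x16:.0f} GB/s, Gen6 x16: {gen6_x16:.0f} GB/s")
```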



Memory Expansion and Pooling Create New Chip Segments

The first CXL use cases revolve around memory expansion, starting with single-host configurations. The simplest example is a CXL memory module, such as Samsung's 512GB DDR5 memory expander with a PCIe Gen5 x8 interface in an EDSFF form factor. This module uses a CXL memory controller from Montage Technology, and the vendors claim support for CXL 2.0. Similarly, Astera Labs offers a DDR5 controller chip with a CXL 2.0 x16 interface. The company developed a PCIe add-in card combining its Leo controller chip with four RDIMM slots, handling up to 2TB of DDR5 DRAM in total.

CXL-attached memory can increase bandwidth and capacity, but it also increases access latency relative to DRAM attached directly to a CPU's integrated memory controllers. In fact, CXL introduces zero-core NUMA domains, creating a new memory tier. Until software can be tuned to better handle tiered memory, it's important to minimize access latency. This factor creates a barrier to adoption for CXL switch chips, which offer a simple path to memory pooling. By connecting multiple hosts (servers) to multiple CXL expanders, switch chips enable a pool of memory that can be flexibly allocated across hosts.
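As a rough illustration of why placement matters, the toy model below blends local-DRAM and CXL-tier latencies; the ~100ns and ~200ns figures are illustrative assumptions, not measurements of any particular platform.

```python
# Toy model of average memory latency in a two-tier system.
# Latencies are illustrative assumptions: ~100 ns for CPU-attached
# DDR5, ~200 ns for CXL-attached memory one link hop away (a switch
# hop would add more).

def blended_latency_ns(hot_fraction, local_ns=100, cxl_ns=200):
    """Average latency when hot_fraction of accesses hit local DRAM
    and the remainder go to the CXL tier."""
    return hot_fraction * local_ns + (1 - hot_fraction) * cxl_ns

# If tiering software keeps 90% of accesses in local DRAM, the CXL
# tier adds only ~10 ns on average; naive 50/50 placement adds ~50 ns.
print(blended_latency_ns(0.9))   # 110.0
print(blended_latency_ns(0.5))   # 150.0
```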

To eliminate the added latency of a switch hop, multiple vendors are developing CXL-expander chips with multiple host interfaces, or heads. These multi-headed devices allow a small number of hosts to share a memory pool. For example, Astera's expander chip can be configured with two x8 host interfaces. Startup Tanzanite Silicon Solutions demonstrated an FPGA-based prototype with four heads prior to its acquisition by Marvell. At last month's OCP Summit, Marvell disclosed a roadmap to eight x8 hosts in a forthcoming chip. These multi-headed controllers can form the heart of a memory appliance offering a pool of DRAM to a small number of servers.
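Conceptually, a multi-headed expander behaves like a single DRAM pool carved up across a few host ports. The sketch below is a toy model of that bookkeeping under assumed parameters (eight heads and a hypothetical 2TB pool); it is not any vendor's actual interface.

```python
# Toy model of a multi-headed CXL memory expander: one DRAM pool,
# several host ports ("heads"), with capacity carved out per host.
# Illustrative sketch only -- not any vendor's actual API.

class MultiHeadExpander:
    def __init__(self, heads, capacity_gb):
        self.free_gb = capacity_gb
        self.allocations = {h: 0 for h in range(heads)}

    def allocate(self, head, size_gb):
        """Assign size_gb of the shared pool to the host on `head`."""
        if head not in self.allocations:
            raise ValueError(f"no such head: {head}")
        if size_gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.allocations[head] += size_gb
        self.free_gb -= size_gb

    def release(self, head, size_gb):
        """Return capacity from a host back to the shared pool."""
        self.allocations[head] -= size_gb
        self.free_gb += size_gb

# Eight heads (matching Marvell's disclosed head count) sharing a
# hypothetical 2TB pool:
pool = MultiHeadExpander(heads=8, capacity_gb=2048)
pool.allocate(head=0, size_gb=512)
pool.allocate(head=3, size_gb=256)
print(pool.free_gb)   # 1280
```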

Because memory pooling can alleviate the problem of stranded memory, we expect hyperscale data-center operators to adopt pooled expanders in the near term. As a result, we forecast single-host and pooled expanders will grow in parallel, as the figure below shows.
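A simple worked example shows why pooling helps with stranding. Suppose each server occasionally peaks well above its typical demand, but peaks rarely coincide; the numbers below are purely illustrative assumptions.

```python
# Illustrative stranded-memory arithmetic (all numbers are assumptions).
# Without pooling, every server must be provisioned for its own peak;
# with pooling, headroom is sized for the aggregate peak instead.

servers    = 16
typical_gb = 512     # memory a host actually uses most of the time
peak_gb    = 1024    # worst-case demand a host must be able to reach
coincident = 4       # assumed max hosts peaking simultaneously

dedicated = servers * peak_gb
pooled    = servers * typical_gb + coincident * (peak_gb - typical_gb)

print(f"dedicated: {dedicated} GB")   # 16384 GB
print(f"pooled:    {pooled} GB")      # 10240 GB, ~37% less DRAM
```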


Sharp-eyed readers will note a third category begins to emerge in 2026. Shared-memory expanders will appear as CXL 3.x develops into a true fabric. We will explore GPU use cases for CXL 3.x in a future post. For more information on our CXL Chip Forecast, please contact Bob Wheeler via LinkedIn.

