Every year at GTC, the conversations on the show floor and in the hallways tell you something that raw specs alone can’t. This year, what I’m hearing is a clear inflection: the era of one-size-fits-all compute is over. Hyperscalers are deploying heterogeneous racks where NVIDIA GPUs sit alongside custom accelerators, XPUs, and purpose-built silicon developed for specific workload profiles—and the accelerator mix is only going to keep multiplying. That diversity is what makes connectivity so critical right now. Each architecture brings its own interface requirements, performance characteristics, and protocol features, and the connectivity joining those components has to be designed around each of them. That’s where connectivity stops being infrastructure and starts being a source of competitive differentiation.
NVLink Fusion: built for heterogeneous AI infrastructure
Our collaboration with NVIDIA on NVLink Fusion, announced last May, reflects exactly where this market is headed. NVLink Fusion enables non-NVIDIA XPUs to connect to NVIDIA GPUs using NVLink—the same proven, high-bandwidth, low-latency, memory-semantic interconnect that powers NVIDIA’s most advanced systems.
For hyperscalers, this presents a significant advantage: they can leverage the same scale-up infrastructure across diverse XPU and NVIDIA GPU architectures. This unified approach simplifies deployment, streamlines management, and maximizes infrastructure investment as the accelerator landscape evolves.
This collaboration builds on a multi-generational partnership. Astera Labs’ Aries PCIe/CXL Smart DSP Retimers have been deployed at volume across NVIDIA Hopper, Blackwell (B200), and several MGX and HGX platforms. At GTC 2025, we demonstrated the industry’s first end-to-end PCIe 6 interoperability with Scorpio P-Series Fabric Switches, Aries 6 Retimers, and NVIDIA Blackwell GPUs. Scorpio P-Series has also been integrated into the NVIDIA MGX platform for PCIe 6-ready modular designs. NVLink Fusion extends that foundation into the heterogeneous, custom-silicon era of AI infrastructure.
What this means if you’re building AI infrastructure
For hyperscalers evaluating next-generation deployments: our integration work across multiple generations means you have a proven connectivity partner, with demonstrated PCIe 6 interoperability and NVLink Fusion capability that extends the performance envelope of customized NVIDIA rack-scale platforms.
If you’re exploring heterogeneous architectures that blend NVIDIA GPUs with custom or third-party accelerators, we’re delighted to support you with NVLink Fusion, which gives you a path to scale up multiple compute solutions without sacrificing the scalability and performance that large model training and inference demand.
And if your architecture requires something beyond off-the-shelf connectivity—complete custom solutions tailored to your specific silicon—that’s the conversation our team is built to have.
If you’re at GTC this week, we’d love to connect! Reach out to our team to go deeper on what NVLink Fusion could mean for your architecture.