Aries PCIe®/CXL® Smart Cable Modules

Multi-rack GPU clustering for AI
over copper cables

Unleash GPU-to-GPU Clustering Across Racks

  • Extends PCIe® 6.0, 5.0, and 4.0, and CXL® signal reach across dense AI racks using Active Electrical Cables
  • Built on the proven track record of our widely deployed and field-tested Aries PCIe/CXL Smart DSP Retimers
  • Supports multiple form factors and configurations to accommodate diverse AI system topologies
  • COSMOS suite for performance optimization and seamless fleet management across dense AI platforms

Aries SCM Highlights

Enabling reliable PCIe/CXL for rack-to-rack connectivity

  • Robust Signal Integrity: over copper cables with flexible link bifurcation

  • Extended Cable Reach: compared to passive DAC solutions

  • Enhanced Diagnostics & Telemetry: advanced capabilities through in-band and out-of-band management

Why Use Aries Smart Cable Modules?

Aries SCMs enable robust, easy-to-design PCIe and CXL cabling, while the built-in COSMOS suite adds advanced fleet management, security, and deep diagnostic capabilities critical to ensuring high reliability and uptime.

Robustness

  • Supported Data Rates

    64 GT/s, 32 GT/s, 16 GT/s, 8 GT/s, 5 GT/s, and 2.5 GT/s with automatic link equalization

  • Power Efficient

    Low-power CMOS process and L1.0 low-power mode minimize rack power/thermal density

  • Low Latency

    Less than 10 ns

  • Security

    Advanced security features help prevent malicious attacks such as unauthorized firmware loading, module diagnostics access, and configuration changes

Ease-of-use

  • Thin Cable Gauge

    Offering thin cables in various lengths for flexible bend radius

  • Extended Reach

    Up to 7m for long reach inter-rack PCIe connectivity

  • Flexible Supply Chain

    Aries SCMs are compatible with multiple cable vendors, enabling second sourcing of active cable assemblies and accelerating time to market

  • Interop Testing

    Rigorous system testing with 50+ endpoints and all major root complexes

Fleet Management

  • Quick Debug

    Built-in protocol analyzer with link state history and timestamps, full non-destructive eye scan for RX lane margining, and self-test features to minimize link downtime and accelerate fault isolation

  • Deep Diagnostics

    Firmware-driven link health monitoring to alert BMC of any possible link performance issues
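Alongside the SCM's firmware-driven monitoring and BMC alerting, a host can run its own coarse link-health check. A minimal sketch using Linux sysfs (this is generic PCIe tooling, not the COSMOS API; the device address and expected values are illustrative):

```python
# Host-side sanity check that a PCIe link behind a cable module trained
# at the expected speed and width, using Linux sysfs attributes
# current_link_speed / current_link_width. Illustrative only; real fleet
# monitoring would go through the SCM's in-band/out-of-band telemetry.
from pathlib import Path

def parse_link_speed(text: str) -> float:
    """Parse a sysfs current_link_speed string, e.g. '16.0 GT/s PCIe' -> 16.0."""
    return float(text.split()[0])

def link_degraded(speed_text: str, width_text: str,
                  expect_gts: float, expect_width: int) -> bool:
    """True if the trained link is below the expected speed or width."""
    return (parse_link_speed(speed_text) < expect_gts
            or int(width_text) < expect_width)

def check_device(bdf: str, expect_gts: float, expect_width: int) -> bool:
    """Read the live link state of one PCI device (BDF is hypothetical)."""
    dev = Path("/sys/bus/pci/devices") / bdf
    speed = (dev / "current_link_speed").read_text()
    width = (dev / "current_link_width").read_text()
    return link_degraded(speed, width, expect_gts, expect_width)

# Example: flag a x16 Gen5 endpoint that trained down.
# check_device("0000:17:00.0", expect_gts=32.0, expect_width=16)
```

A link that trains at a lower speed or narrower width than expected is often the first visible symptom of a marginal cable or connector, so a check like this pairs naturally with the deeper eye-scan and margining diagnostics above.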

Use Cases


Memory Disaggregation

Expand, pool and share memory between multiple servers to increase memory bandwidth and capacity while providing the option to reclaim stranded or under-utilized…

Resources

Ordering Information

Part Number | Documents       | PCIe Gen | PCIe Lanes | Ordering   | Production Status
PM30-6XX    | Portfolio Brief | PCIe 6.x | Various    | Contact Us | Pre-Production
PM20-5XX    | Portfolio Brief | PCIe 5.0 | Various    | Contact Us | Production

Why Your Mixture-of-Experts Model Is Only as Good as Your Fabric

Introducing Hypercast™ for Improved Intelligence Benchmarks and Tokens-Per-Watt Performance

The more capable you make a frontier AI model, the harder it becomes to run. More parameters, more experts, more sophisticated routing: every architectural improvement that lifts benchmark scores also lifts the communication demands on the hardware connecting compute accelerators such as GPUs…

Bridge the PCIe 6 Transition Without Requalifying Your NIC Fleet

How Aries 6 PCIe Smart Gearbox enables hyperscalers to scale I/O for AI workloads while preserving qualified PCIe 5 frontend NIC infrastructure

AI Infrastructure Is Outgrowing Yesterday's I/O Architecture

AI training and inference workloads don't just demand faster GPUs; they demand faster systems. As model sizes and datasets explode, every critical path matters: moving data into…

What NVIDIA GTC 2026 Said About the Future of AI Connectivity

NVIDIA GTC has always been a window into where the AI Infrastructure industry is heading. This year, what that window revealed was a compute layer fragmenting, deliberately and by design. Jensen Huang’s keynote introduced not one new architecture but several: Vera Rubin for high-throughput GPU compute, the Groq LPU integrated as a decode accelerator for latency-sensitive inference, and a…

Astera Labs Extends Leadership in Open, AI Scale-Up Networking with New 320 Lane Scorpio X-Series Smart Fabric Switch

Now Shipping to Leading Hyperscalers, Scorpio Smart Fabric Switch Family Delivers Breakthrough Accelerator Utilization Through Memory-Semantic Based Open and Platform-Specific Protocols

News Highlights:
  • Largest open, memory-semantic fabric switch: The Scorpio™ X-Series 320 Lane AI fabric switch, shipping today, supports increased scale-up cluster sizes with low latency
  • Intelligent…

Astera Labs Reports First Quarter 2026 Financial Results

  • Record quarterly revenue of $308.4 million, up 14% QoQ and up 93% year-over-year
  • Market-leading PCIe 6 AI fabric and signal conditioning portfolio delivered strong growth during Q1
  • Now shipping newly announced Scorpio™ X-Series 320-lane AI Fabric switch and expanded Scorpio P-Series PCIe 6 switch family supporting 32 to 320 lanes

SAN JOSE, CA, U.S. – May 5, 2026 – Astera Labs,…

Astera Labs Announces Conference Call to Review First Quarter 2026 Financial Results

SAN JOSE, CA, U.S. – April 2, 2026 – Astera Labs, Inc. (Nasdaq: ALAB), a leader in semiconductor-based connectivity solutions for rack-scale AI infrastructure, today announced that it will release its financial results for the first quarter 2026 after the close of market on Tuesday, May 5, 2026. Astera Labs will host a corresponding conference call at 1:30 p.m. Pacific Time, 4:30…

Astera Labs Reports Fourth Quarter and Full Year 2025 Financial Results

  • Record quarterly revenue of $270.6 million, up 17% QoQ, and record full-year revenue of $852.5 million, up 115% year-over-year
  • Broadening Scorpio X-Series smart fabric roadmap to address expanding scale-up market opportunities supporting multiple customers, starting production ramp for lead platform
  • Appointed Desmond Lynch as Chief Financial Officer with Mike Tate transitioning to the…

Scorpio X-Series 320 Lane: Largest Open, Memory-Semantic Fabric Switch

Get your first look at the new Scorpio™ X-Series 320-Lane AI Fabric Switch! The frontier models driving today’s most demanding AI applications require connectivity infrastructure that keeps pace with the accelerators powering them.

  • High-radix AI fabric switch replaces multiple legacy switches to enable larger scale-up cluster sizes in a single hop and reduce overall latency
  • Hardware-accelerated…

Scorpio P-Series 32 to 320 Lane: Broadest Family of PCIe 6 Fabric Switches

As GPU clusters scale for frontier AI, the fabric matters. Performance, scalability, and resilience all depend on how CPUs, GPUs, NICs, storage, and memory connect, and that connection is PCIe. This is exactly what Scorpio™ P-Series Fabric Switches are built for. Introducing the expanded Scorpio P-Series PCIe fabric switch family, now spanning 32 to 320 lane configurations, designed…

Extending Reach of Scale-out Networks for AI Clusters

See our latest demo where our Scorpio P-Series Fabric Switches connect NVIDIA H200 GPUs and ConnectX-8 NICs, running real NCCL-based training workloads at max data rates. Our Taurus Ethernet-enabled AECs then extend reach up to 7m with full interoperability with NVIDIA Spectrum-X for switch-to-switch and ConnectX-8 for switch-to-NIC connectivity at 800G per link.

Learn how this demo delivers:
✅…