Leo CXL® Smart Memory Controllers

Purpose-built memory expansion, sharing, and pooling for AI and cloud platforms

Memory Solutions for the AI Era

  • Accelerates AI and cloud infrastructure with memory expansion, sharing, and pooling for enhanced performance
  • Eliminates bandwidth/capacity bottlenecks, reduces total cost of ownership, and optimizes memory utilization
  • Ensures end-to-end data integrity and protection with best-in-class industry standard security features
  • Delivers server-grade customizable RAS, advanced diagnostics, and fleet management via COSMOS suite

Leo Highlights

Accelerating AI with CXL Memory

Increase Memory Capacity

With multiple DDR5 channels, up to 5600 MT/s

Seamless Interoperability

Stress-tested with all major xPUs and memory vendors

Enhanced Diagnostics & Telemetry

Advanced capabilities through in-band and out-of-band management
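As a rough sense of scale for the DDR5-5600 figure above, peak channel bandwidth follows from transfer rate times bus width. This is a back-of-the-envelope sketch: the two-channel configuration matches the E/P-Series part listings below, and everything else is standard DDR5 arithmetic, not a vendor specification.

```python
# Back-of-the-envelope DDR5 bandwidth estimate (illustrative, not vendor data).
transfers_per_sec = 5600e6   # DDR5-5600: 5600 mega-transfers/s per channel
bus_width_bytes = 8          # 64-bit data bus = 8 bytes per transfer

per_channel_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"Per channel: {per_channel_gb_s:.1f} GB/s")    # Per channel: 44.8 GB/s

# A two-channel controller (as in the E/P-Series parts) roughly doubles that.
two_channel_gb_s = 2 * per_channel_gb_s
print(f"Two channels: {two_channel_gb_s:.1f} GB/s")   # Two channels: 89.6 GB/s
```

Actual sustained bandwidth is lower once refresh, command overhead, and access patterns are accounted for; the figures above are theoretical peaks only.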

Leo Delivers Proven Application Value with CXL

Astera Labs’ Leo CXL Smart Memory Controller is the industry’s first purpose-built solution that supports both memory expansion and memory pooling to solve performance bottlenecks and capacity constraints in cloud servers.

AI Inferencing: Recommendation System

73% more recommendations per second

AI Inferencing: Chatbot Services

40% faster time to insights with LLM

HPC: Computer Aided Engineering

50% more iterations per second

In-Memory Databases: Transaction Processing

150% more transactions per second

E-Series

  • Memory Expansion

P-Series

  • Memory Expansion
  • Memory Pooling
  • Memory Sharing

Leo A-Series Hardware Solutions

Leo A-Series CXL Smart Memory Hardware solutions offer all the benefits of Leo Controllers and enable quick plug-and-play deployment with faster time-to-market for system OEMs and data centers.

  • PCIe x16 CEM add-in card form factor
  • Up to 4x DDR5 RDIMMs supporting up to 2TB
  • On-board debug connectors for fleet management on cloud servers
  • Temperature and health monitoring of Leo controller and memory
  • RDIMM fault isolation and error correction
  • High volume production-qualified solutions with robust supply chain

Use Cases


Memory Disaggregation

Expand, pool, and share memory across multiple servers to increase memory bandwidth and capacity while providing the option to reclaim stranded or under-utilized…
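As a purely illustrative sketch of the reclaim idea, the following toy model tracks a pool that lends capacity to hosts and reclaims it when released. The class and numbers are invented for this note; this is not an Astera Labs API, just the pooling concept in miniature.

```python
# Toy model of pooled memory shared across hosts (illustrative only;
# not an Astera Labs API - just a sketch of the pooling/reclaim concept).
class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}          # host -> GB currently borrowed

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host, gb):
        """Lend capacity to a host if the pool has it free."""
        if gb > self.free_gb():
            return False               # request exceeds remaining pool capacity
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host):
        """Reclaim everything a host borrowed back into the pool."""
        return self.allocations.pop(host, 0)

pool = MemoryPool(capacity_gb=2048)    # e.g. a 2TB CXL-attached pool
pool.allocate("host-a", 1024)
pool.allocate("host-b", 512)
print(pool.free_gb())                  # 512
pool.release("host-a")                 # host-a's memory is reclaimed...
print(pool.free_gb())                  # 1536 ...and available to other hosts
```

The point of the sketch: capacity released by one host immediately becomes allocatable to any other, which is exactly the stranded-memory problem that per-server DIMM provisioning cannot solve.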

Resources

Ordering Information

Documents (all parts): Leo CXL Smart Memory Controllers Portfolio Brief, https://www.asteralabs.com/wp-content/uploads/2025/05/Leo_CXL_Smart_Memory_Controllers_Portfolio_Brief.pdf

A1000-1254AB: Leo A1000 CXL™ Smart Memory Add-in Card

  • CXL Spec: CXL 1.1/2.0 | CXL Link: 16x32G
  • Memory: 4x DDR5-5600 RDIMM slots | Capacity: 2TB
  • Expansion: Yes | Pooling/Sharing: Yes
  • Ordering: https://www.asteralabs.com/product-details/a1000-1254ab/

CM5082E-*: Leo E-Series CXL 2.0 x8 Smart Memory Controller

  • CXL Spec: CXL 1.1/2.0 | CXL Link: 8x32G
  • Memory: 2ch DDR5 up to 5600 MT/s | Capacity: 2TB
  • Expansion: Yes | Pooling/Sharing: No
  • Ordering: https://www.asteralabs.com/product-details/cm5082e/

CM5162E-*: Leo E-Series CXL 2.0 x16 Smart Memory Controller

  • CXL Spec: CXL 1.1/2.0 | CXL Link: 16x32G
  • Memory: 2ch DDR5 up to 5600 MT/s | Capacity: 2TB
  • Expansion: Yes | Pooling/Sharing: No
  • Ordering: https://www.asteralabs.com/product-details/cm5162e/

CM5162P-*: Leo P-Series CXL 2.0 x16 Smart Memory Controller

  • CXL Spec: CXL 1.1/2.0 | CXL Link: 16x32G
  • Memory: 2ch DDR5 up to 5600 MT/s | Capacity: 2TB
  • Expansion: Yes | Pooling/Sharing: Yes
  • Ordering: https://www.asteralabs.com/product-details/cm5162p/
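The "CXL Link" notation in the ordering information (e.g. 16x32G) reads as lane count × per-lane signaling rate. A rough, hedged calculation of what that implies for raw link bandwidth, assuming standard 32 GT/s (PCIe 5.0-class) signaling with 128b/130b line encoding:

```python
# Approximate CXL 2.0 link bandwidth from the "16x32G" notation.
# Real usable throughput is lower once protocol and payload overheads apply.
lanes = 16
gt_per_s = 32                       # 32 GT/s per lane (PCIe 5.0-class signaling)

raw_gb_s = lanes * gt_per_s / 8     # one direction, before line encoding
print(raw_gb_s)                     # 64.0

encoded_gb_s = raw_gb_s * 128 / 130 # 128b/130b encoding overhead
print(round(encoded_gb_s, 1))       # 63.0
```

The x8 parts halve these numbers; either way the link comfortably exceeds a single DDR5-5600 channel, which is why two DRAM channels behind one link is a sensible pairing.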

Breaking the 100 GB/s Barrier: Astera Labs and Micron Demonstrate Production-Ready PCIe 6 Storage Performance at FMS 2025

The future of AI infrastructure arrived at the Future of Memory and Storage (FMS) 2025 conference, where Astera Labs and Micron achieved a groundbreaking milestone: over 100 GB/s of sustained throughput using production-ready PCIe 6 technology. This industry-first demonstration represents a transformative advancement in AI storage performance, featuring four Micron 9650 PCIe Gen6 SSDs and…

Astera Labs at FMS 2025: Accelerating Storage and Memory Innovation in the AI Infrastructure 2.0 Era

As AI models push computational boundaries with breakthrough reasoning capabilities, storage and memory must also evolve to optimize AI Infrastructure 2.0. From training massive models to enabling real-time inference, every part of the AI workflow relies on seamless connectivity between diverse storage and memory technologies. Join Astera Labs at the Future of Memory and Storage Summit…

Astera Labs at OCP APAC Summit: Advancing Open AI Infrastructure 2.0 Through Rack-Scale Connectivity

As AI training clusters scale to 200,000+ GPUs, traditional server architectures require a fundamental paradigm shift to handle this unprecedented scale. Join Astera Labs at the OCP APAC Summit, August 5-6 in Taipei, as we put a spotlight on the transition to AI Infrastructure 2.0, where the rack is replacing the server as the new unit of compute. This transformation isn't just evolutionary; it's…

Astera Labs Announces Third Quarter 2025 Financial Conference Participation

SAN JOSE, CA, U.S. – Aug. 20, 2025 – Astera Labs, Inc. (Nasdaq: ALAB), a leader in semiconductor-based connectivity solutions for rack-scale AI infrastructure, today announced its participation in financial conferences for the third quarter of 2025. Deutsche Bank 2025 Technology Conference on Aug. 28, 2025; Astera Labs' presentation is scheduled for 12:30 pm PT. Citi's 2025 Global…

Astera Labs Announces Financial Results for the Second Quarter of Fiscal Year 2025

Record quarterly revenue of $191.9 million, up 20% QoQ and 150% YoY, and record operating cash flow generation of $135.4 million. Industry-leading PCIe 6 connectivity portfolio ramping in volume on customized rack-scale AI systems. Scorpio Fabric Switch design wins expand across multiple new customers and applications. SAN JOSE, CA, U.S. – August 5, 2025 – Astera Labs, Inc. (Nasdaq:…

Astera Labs Opens New Global Headquarters in San Jose to Accelerate AI Infrastructure Innovation

900-employee campus powers Astera Labs' mission to usher in the rack-scale computing era. SAN JOSE, Calif. – July 18, 2025 – Astera Labs, Inc. (Nasdaq: ALAB), a provider of semiconductor connectivity solutions for AI and cloud infrastructure, today announced the opening of its new corporate headquarters in San Jose, California. Designed to accommodate up to 900 employees, the new…

Astera Labs Announces Conference Call to Review Second Quarter 2025 Financial Results

SAN JOSE, Calif., July 08, 2025 — Astera Labs, Inc. (Nasdaq: ALAB), a global leader in semiconductor-based connectivity solutions for AI and cloud infrastructure, today announced that it will release its financial results for the second quarter 2025 after the close of market on Tuesday, Aug. 5, 2025. Astera Labs will host a corresponding conference call at 1:30 p.m. Pacific Time, 4:30…

Rack-Scale AI Connectivity with UALink

When the ecosystem works together, innovation happens faster. As a founding board member of the UALink™ Consortium, learn how Astera Labs – in collaboration with XPU partners – plans to deliver a complete portfolio of UALink products to support scale-up AI infrastructure.

Leading the AI Infrastructure 2.0 Era

AI has outgrown the server. The rack is now the unit of compute: a tightly integrated, AI-optimized system. Welcome to AI Infrastructure 2.0. Astera Labs' co-founders share how the company is leading the transition to AI Infrastructure 2.0 with a complete portfolio of purpose-built silicon hardware and software solutions grounded in open standards.

Accelerate AI with Real-World CXL Platforms

We collaborated with SMART Modular Technologies to showcase our Leo CXL™ Smart Memory Controller in a real-world setup that delivered up to 2TB of additional CXL memory with SMART CXL add-in cards. In the demo, Large Language Model inferencing tasks were run using FlexGen, achieving 5.5× higher throughput and 90% GPU utilization, demonstrating the efficiency gains CXL brings to AI workloads…