Leo CXL™ Memory Accelerators
Astera Labs offers Leo CXL™ Memory Accelerators that overcome processor memory bandwidth bottlenecks and capacity limitations while offering built-in fleet management and deep diagnostic capabilities critical to enterprise data centers and cloud deployments.
The Leo Memory Accelerator Platform for Compute Express Link™ (CXL) 1.1/2.0 interconnects enables robust disaggregated memory pooling and expansion for processors, workload accelerators, and smart I/O devices. The Leo CXL Platform of ICs and hardware expands overall memory bandwidth through a 32 GT/s-per-lane CXL interface, adds capacity up to 2 TB, maintains ultra-low latency, and provides server-class RAS features for robust and reliable cloud-scale operation.
- CXL Type-3 Device Platform Implementing CXL.mem for Memory Pooling and Memory Expansion
- CXL™ 2.0 Host Interface Operating up to 32 GT/s per Lane (see the bandwidth sketch after this list)
- Enables Low-Latency Producer-Consumer Model for Multi-CPU Topologies
- Multiple DDRx Channels Supporting Multiple DIMM Slots to Aggregate up to 2 TB of Memory
- Secure DDR Memory Interface
- Purpose-Built for Low-Latency: CXL Fast Data Path, Smart Data Prefetch
- RAS Features: Advanced Error Correction Capabilities for Cloud Scale Deployments
- Advanced Analytics/Telemetry: Write/Read Latency Monitoring, Transaction Counters, Temperature Sensors
- Flip-Chip BGA Package
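To put the 32 GT/s per-lane figure in context, here is a back-of-the-envelope bandwidth calculation. The x16 link width and the 128b/130b encoding come from the PCIe 5.0 signaling that CXL 2.0 runs on; they are illustrative assumptions, not Leo-specific documentation.

```python
# Rough CXL link bandwidth from PCIe 5.0 electricals.
RAW_RATE_GT_S = 32    # 32 GT/s per lane (PCIe 5.0 signaling rate)
ENCODING = 128 / 130  # 128b/130b line encoding at Gen 5 rates
LANES = 16            # assumed full x16 link for illustration

# Usable bytes per second, per direction, before protocol overhead.
bw_per_lane_gb_s = RAW_RATE_GT_S * ENCODING / 8
bw_link_gb_s = bw_per_lane_gb_s * LANES

print(f"per lane: {bw_per_lane_gb_s:.2f} GB/s")  # ~3.94 GB/s
print(f"x16 link: {bw_link_gb_s:.1f} GB/s")      # ~63 GB/s per direction
```

At x16, that works out to roughly 63 GB/s of raw bandwidth per direction before protocol overhead.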
The Astera Labs Difference
Leo CXL Memory Accelerators not only unlock the memory expansion and pooling benefits of the CXL.mem protocol, but also provide cloud service providers with the tools and diagnostics essential for advanced fleet management.
Unlock the Full Potential of CXL™ with Purpose-Built Connectivity Solutions from Astera Labs
Meet the Leo CXL™ Memory Accelerator Platform and Aries CXL Smart Retimers – Astera Labs’ portfolio of solutions that unlock the full potential of data-centric systems based on Compute Express Link™ technology.
Leo CXL™ Memory Accelerator Platform Introduction and CXL Interop Demo
Meet the Leo CXL™ Memory Accelerator Platform for memory pooling and expansion applications; view our CXL interoperability demonstration using the Solstice 3U Riser Card.
CXL™ is needed to overcome CPU-memory and memory-storage bottlenecks faced by computer architects. CXL allows for a new memory fabric supporting various processors (CPUs, GPUs, DPUs) sharing heterogeneous memory. Future data centers need heterogeneous compute, a new memory and storage hierarchy, and an agnostic interconnect to tie it all together.
Traditional DRAM and persistent storage class memory (SCM) are both supported, allowing designers to balance performance and cost.
Compute Express Link™ (CXL™) is an open interface that standardizes a high performance interconnect for data-centric platforms involving various XPUs. It provides a uniform means of connection to CPUs, GPUs, FPGAs, storage, memory, and networking.
The CXL™ protocol supports three different types of devices:
- Type 1 Caching Devices / Accelerators
- Type 2 Accelerators with Memory
- Type 3 Memory Buffer
These capabilities enable several key use cases:
- Memory tiering, in which additional capacity is added as a variable mix of lower-latency direct-attached memory and higher-latency large-capacity memory (see the tiering sketch after this list)
- Higher VM density per system, enabled by more attached memory capacity
- A caching layer provided by SCM that improves performance for large databases
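As referenced in the tiering bullet above, here is a minimal sketch of how a host might distinguish memory tiers on Linux, where CXL-attached expansion memory typically surfaces as a CPU-less NUMA node. The sysfs paths are standard Linux; treating every CPU-less node as a far (CXL) tier is a simplifying assumption that depends on the platform.

```python
# List each NUMA node's capacity and guess its tier from whether it has CPUs.
import glob
import re

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = node_dir.rsplit("node", 1)[-1]
    with open(f"{node_dir}/meminfo") as f:
        mem_total_kb = int(re.search(r"MemTotal:\s+(\d+) kB", f.read()).group(1))
    with open(f"{node_dir}/cpulist") as f:
        cpus = f.read().strip()
    # Assumption: a CPU-less node is far memory (possibly CXL-attached).
    tier = "far (possibly CXL)" if not cpus else "near (direct-attached)"
    print(f"node{node}: {mem_total_kb // 1024} MiB, cpus=[{cpus}] -> {tier}")
```

A tiering policy can then keep hot pages on near nodes and demote colder pages to the far tier.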
The CXL standard multiplexes three protocols over a single link:
- CXL.io is used for initialization, link-up, device discovery and enumeration, and register access. It provides a non-coherent load/store interface for I/O devices similar to PCIe® 5.0 (a discovery sketch follows this list).
- CXL.cache defines interactions between a Host and a Device, allowing CXL devices to cache host memory with low latency.
- CXL.mem provides a Host processor with direct access to Device-attached memory using load/store commands.
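To make CXL.io-style discovery concrete, the sketch below lists CXL memory devices that the Linux CXL driver stack has enumerated. The /sys/bus/cxl layout and the ram/size attribute follow the upstream kernel sysfs ABI as we understand it; treat the exact paths as assumptions to verify against your kernel version.

```python
# Enumerate CXL memory devices (memN) registered on the Linux CXL bus.
import os

CXL_BUS = "/sys/bus/cxl/devices"

if os.path.isdir(CXL_BUS):
    for dev in sorted(os.listdir(CXL_BUS)):
        if not dev.startswith("mem"):
            continue
        # Volatile capacity attribute per the upstream sysfs ABI (assumed).
        size_path = os.path.join(CXL_BUS, dev, "ram", "size")
        size = "unknown"
        if os.path.exists(size_path):
            with open(size_path) as f:
                size = f.read().strip()  # hex byte count, e.g. 0x100000000
        print(f"{dev}: volatile capacity = {size}")
else:
    print("no CXL bus registered (driver not loaded or no CXL devices)")
```

On distributions that package it, the `cxl list` utility from the ndctl project reports the same inventory.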
CXL™ runs on the PCIe® 5.0 PHY and electrical signaling, and natively supports x16, x8, and x4 link widths.
CXL™ 2.0 adds support for switching, persistent memory, and security, as well as memory pooling to maximize memory utilization and reduce or eliminate the need to over-provision memory.
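The over-provisioning argument is easy to quantify. The sketch below uses entirely hypothetical numbers, not Astera Labs data: without pooling, each server carries DRAM for its own peak; with a shared pool, the fleet only needs to cover the spikes that actually overlap.

```python
# Hypothetical fleet: four servers, each occasionally spiking to 768 GB,
# but (by assumption) never more than one spiking at a time.
per_server_peak_gb = [768, 768, 768, 768]
baseline_gb = 256       # direct-attached DRAM per server
concurrent_spikes = 1   # assumed worst-case overlap of peaks

# Without pooling: provision every server for its own peak.
dedicated_gb = sum(max(peak, baseline_gb) for peak in per_server_peak_gb)

# With a shared CXL pool: baseline DRAM everywhere, plus a pool sized for
# the largest spikes that can occur simultaneously.
excess = sorted((max(peak - baseline_gb, 0) for peak in per_server_peak_gb),
                reverse=True)
pooled_gb = baseline_gb * len(per_server_peak_gb) + sum(excess[:concurrent_spikes])

print(f"dedicated: {dedicated_gb} GB")  # 3072 GB
print(f"pooled:    {pooled_gb} GB")     # 1536 GB
```

In this toy model pooling halves provisioned capacity; the real saving depends on how often peaks actually coincide across the fleet.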
Connectivity Is Key to Harnessing Data
As our appetite for creating and consuming massive amounts of data continues to grow, so too will our need for increased cloud capacity to store and analyze this data. Additionally, the server connectivity backbone for data center infrastructure needs to evolve as complex AI and ML workloads become mainstream in the cloud.
Rapidly implement diverse system topologies using our plug-and-play connectivity system boards. Our solutions include:
- Smart Cable Modules: Active copper-based solutions that address reach, signal integrity, and bandwidth utilization for 100G/lane Ethernet switch-to-switch and switch-to-server interconnects.
- Riser Cards: Extend PCIe/CXL technology slots and enable incredibly complex multi-connector topologies.
- PCIe-Over-Cable Extender Cards: Connect a server head-node to a JBoF or JBoG without sacrificing speed.
- GPU Booster Cards: Support external graphics (eGPUs) and enhance the gaming experience.
Design rapidly, with signal-integrity peace of mind, instead of wondering whether a PCIe 4.0/5.0, CXL 1.1/2.0, or 100G/lane Ethernet technology-based design will work. Our team can help you achieve first-pass design success and accelerate time to market for your systems and boards.