Astera Labs Delivers Industry-First CXL Interop with DDR5-5600 Memory Modules

Earlier this year, we announced the launch of our Cloud-Scale Interop Lab for CXL to provide robust interoperability testing between our Leo Memory Connectivity Platform and a growing ecosystem of CXL-capable CPUs, memory modules, and operating systems. This critical testing enables customers to deploy CXL-attached memory with confidence by minimizing interoperability risk, reducing system development time and cost, and accelerating time-to-market.

At that time, we released Interop Reports for DDR5-4800 memory modules with leading vendors. We’re now excited to announce the availability of Interop Reports for the latest DDR5-5600 memory modules, demonstrating our commitment to delivering industry-leading performance for our customers.

Through close collaboration with our ecosystem partners, including AMD, Intel, Micron, Samsung, and SK hynix, we have further optimized our Leo Memory Connectivity Platform to deliver best-in-class performance gains with DDR5-5600 RDIMMs.

DDR5 memory modules support:

  • A higher-speed RCD (registered clock driver) to redrive command and address signals
  • PMIC (power management integrated circuit) to enhance power management and monitoring
  • Two independent subchannels to improve data throughput in server applications
  • SPD (serial presence detect) hub to provide sideband access to critical module parameters, improving usability and telemetry
  • Temperature sensor ICs on the RDIMM to enable constant monitoring of module temperatures

These DDR5 features can create challenges for interoperability. For example, memory vendors use different combinations of RCDs and PMICs, thereby enlarging the RDIMM test matrix and scope of regression testing. With our firmware-defined solution, we’ve fine-tuned Leo to overcome these challenges.

Figure 1: DDR5 Module Layout

We’ve conducted rigorous electrical testing, application-level testing, analysis, and performance tuning on a wide spectrum of RDIMMs with our Leo Memory Connectivity Platform.

Leo also comes with fleet-management capabilities for cloud-scale deployments. With the Leo SDK, our Cloud-Scale Interop Lab orchestrates firmware updates and functional tests across our various lab environments. This ensures that all Leo SVBs have the most up-to-date firmware for validating new RDIMMs, and we continuously run regression tests against a strategic selection of RDIMMs with different electrical properties from all major memory vendors.

Validating DDR5-5600 RDIMMs brings significant benefits for CXL-attached memory, including:

  • Higher performance per RDIMM, lowering TCO (total cost of ownership)
  • Increased memory bandwidth and lower latency to saturate a PCIe 5.0 x16 CXL 1.1/2.0 link
  • Accelerated time-to-market with flexible supply chain

Rigorous Interoperability Testing

To ensure high confidence in our Leo Memory Connectivity Platform, we’ve developed robust orchestration and automation capabilities. Our comprehensive test suite includes industry-standard tools, such as MLC (Memory Latency Checker), to measure performance and ensure consistency across similar memory capacities, configurations, and ranks. Based on our MLC tests, 64GB DDR5-5600 RDIMMs deliver a ~14% boost in MLC performance compared to DDR5-4800 RDIMMs.
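For context on that ~14% figure: the theoretical peak-bandwidth headroom from 4800 to 5600 MT/s is about 16.7%, so the measured MLC gain is consistent with the raw transfer-rate increase. A minimal back-of-envelope sketch (our own arithmetic, not data from the Interop Reports):

```python
# Peak theoretical bandwidth of a DDR5 module scales linearly with
# transfer rate: bytes/s = (MT/s) * (data-bus width in bytes).
def peak_bandwidth_gbs(mt_per_s: float, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a 64-bit (8-byte) data bus."""
    return mt_per_s * 1e6 * bus_bytes / 1e9

bw_4800 = peak_bandwidth_gbs(4800)   # 38.4 GB/s
bw_5600 = peak_bandwidth_gbs(5600)   # 44.8 GB/s
gain = bw_5600 / bw_4800 - 1         # ~0.167, i.e. ~16.7% theoretical headroom
print(f"{bw_4800:.1f} -> {bw_5600:.1f} GB/s (+{gain:.1%})")
```

The theoretical figure is an upper bound; real MLC numbers sit below it because latency, access patterns, and controller overheads also factor in.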

Figure 2: Relative MLC Performance


For the MLC test, each NUMA node was configured at 1DPC (one DIMM per channel): eight DDR5-4800 RDIMMs were populated per socket, running at 4800 MT/s, and two DDR5-5600 RDIMMs were populated on the Leo SVB (System Validation Board). As shown below, each NUMA node has 128GB of memory capacity, but the Leo-attached RDIMMs run at the higher DDR5-5600 speed.

Figure 3: NUMA nodes from Terminal
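The per-node capacity math behind that configuration can be sanity-checked in a few lines (the 16GB size of the native DDR5-4800 RDIMMs is an assumption for illustration; the post only states the 128GB per-node total and the 64GB size of the DDR5-5600 modules):

```python
# Per-NUMA-node capacity in the 1DPC test setup described above.
# Assumption: the eight native DDR5-4800 RDIMMs are 16GB each (not
# stated in the post); the Leo SVB carries two 64GB DDR5-5600 RDIMMs.
native_node_gb = 8 * 16   # eight RDIMMs per socket -> 128GB
leo_node_gb = 2 * 64      # two RDIMMs on the Leo SVB -> 128GB
print(native_node_gb, leo_node_gb)  # equal capacities, differing speeds
```

Matching capacities across nodes keeps the MLC comparison apples-to-apples, so any performance delta is attributable to transfer rate rather than capacity.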

We have run this test with RDIMMs from all major memory vendors, covering different capacities, speeds, and PMIC/RCD combinations. This is a proof point for the stability and performance we have achieved with our production-ready Leo Memory Connectivity Platform.

Ecosystem Support

We are collaborating on interop testing with industry leaders delivering high-performance DDR5 memory modules for the expanding CXL market.

Siva Makineni, Vice President of Advanced Memory Systems at Micron Technology, said: “We’re pleased to continue our collaboration with Astera Labs on interoperability testing, power and performance optimization to bring up the industry-first Cloud-Scale Interop Lab.”

Jangseok Choi, Vice President of New Business Planning Team at Samsung Electronics, said: “Our memory modules combined with CXL enable servers to expand memory capacity to tens of terabytes, and we’re excited to partner with Astera Labs to confirm our DDR5-5600 solution is interoperable with Leo Smart Memory Controller and various configurations of processors.”

Hyungsoo Kim, VP and Head of DRAM Application Engineering Group at SK hynix, said: “SK hynix is committed to providing customers with flexible CXL memory expansion with increased bandwidth and capacity. This collaboration with Astera Labs and SK hynix engineering teams from both headquarters and U.S. Engineering Center (UEC) to validate our DDR5-5600 memory modules with Astera Labs’ Leo Memory Connectivity Platform and CXL-capable CPUs is the next critical step in enabling a CXL ecosystem that meets the performance needs of our customers with the most advanced and innovative memory technologies such as DDR5.”


As CXL adoption gains momentum this year, we remain at the forefront, working closely with our customers and partners to deploy innovative CXL solutions.

Our support for DDR5-5600 RDIMMs on the Leo Memory Connectivity Platform is a significant milestone for CXL innovation and a proof point for how disaggregated memory can deliver significant memory bandwidth without compromising memory-intensive workloads such as AI, HPC, Big Data Analytics, and more. With our relentless pursuit of industry-leading performance and close collaboration with key ecosystem partners, we continue to drive innovation and shape the future of memory connectivity.

To access the latest Interop Reports featuring DDR5-5600 RDIMMs, visit