Astera Labs Video Library


How We Test: Leo Memory Connectivity Platform

Learn how our comprehensive interoperability testing reduces design challenges, so you can accelerate time-to-market, streamline development efforts and reduce costs for designing and deploying heterogeneous infrastructure with CXL technology.

Interop Testing with CXL 1.1 Host CPUs and Popular DDR5 Memory Modules

After establishing the foundation of our rigorous testing, we worked with our customers to determine the most popular memory configurations for their systems and applications, which we’ve included in our initial interop reports. We include 64GB DDR5-4800 RDIMMs from Micron, Samsung, and SK Hynix, each of which is tested with CXL 1.1-capable CPUs from AMD and Intel.

Deploy Robust PCIe® 5.0 Connectivity with Aries Smart Retimers

See our Aries Smart Retimers in action in two interoperability demonstrations with key industry partners’ PCIe® 5.0 root complexes and endpoints.

Complex PCIe® Topologies with Switches, SRIS Clocking & Aries Smart Retimers

Learn about PCIe® switches and why certain complex system topologies involving switches need retimers to achieve optimal link performance.

Articles & Insights

Unlocking Cloud Server Performance with CXL

The increasing volume of data and complexity of models require advancements in cloud server architecture to remove memory bottlenecks and unlock performance for compute-intensive workloads such as Artificial Intelligence and Machine Learning. The Leo Memory Connectivity Platform for CXL 1.1 and 2.0 eliminates the memory bottlenecks inherent in today’s architectures and enables new heterogeneous infrastructure that increases performance and reduces costs for cloud-scale deployment.

Connectivity Is Key to Harnessing the Data Reshaping Our World

As our appetite for creating and consuming massive amounts of data continues to grow, so too will our need for increased cloud capacity to store and analyze this data. Additionally, the server connectivity backbone for data center infrastructure needs to evolve as complex AI and ML workloads become mainstream in the cloud.

Taurus Smart Cable Modules™: An Active + Smart Approach to 200/400/800GbE

Today’s data center networks are primarily serviced by 25G/lane Ethernet technology; however, these networks are quickly moving to 50G/lane and 100G/lane to allow hyperscalers to add servers and switches to their Clos network topologies and support data-intensive workloads such as AI and Machine Learning. This rapid growth in Ethernet port speed creates a new set of challenges for the design complexity and serviceability of hyperscale architectures.

The Evolution of the In-Vehicle Network

Interconnect technologies will play an important role in the overall connected car story to meet the needs of mass data transfer within the In-Vehicle Network. We have recently seen these types of challenges and a similar evolution in enterprise data centers, where intelligent systems running data-intensive workloads — such as Artificial Intelligence and Machine Learning — have drastically increased the overall design complexity.