
Astera Labs Video Library

Videos

Interop Testing with CXL 1.1 Host CPUs and Popular DDR5 Memory Modules

After establishing the foundation of our rigorous testing, we worked with our customers to determine the most popular memory configurations for their systems and applications, which we've included in our initial interop reports. These include 64GB DDR5-4800 RDIMMs from Micron, Samsung, and SK Hynix, each of which is tested with CXL 1.1-capable CPUs from AMD and Intel.

Deploy Robust PCIe® 5.0 Connectivity with Aries Smart Retimers

See our Aries Smart Retimers in action via two interoperability demonstrations with key industry partners' PCIe® 5.0 root complexes and endpoints.

Complex PCIe® Topologies with Switches, SRIS Clocking & Aries Smart Retimers

Learn about PCIe® switches and why certain complex system topologies involving switches need retimers to achieve optimal link performance.

Why We Test

Interoperability testing of PCIe® retimers is critical for HPC and cloud applications to support new compute-intensive workloads – such as Artificial Intelligence (AI) and Machine Learning (ML).

Articles & Insights

Taurus Smart Cable Modules™: An Active + Smart Approach to 200/400/800GbE

Today's data center networks are primarily serviced by 25G/lane Ethernet technology; however, these networks are quickly moving to 50G and 100G/lane to allow hyperscalers to add more servers and switches to their Clos network topologies and support data-intensive workloads such as AI and Machine Learning. This rapid growth in Ethernet port speed creates a new set of challenges for the design complexity and serviceability of hyperscale architectures.

The Evolution of the In-Vehicle Network

Interconnect technologies will play an important role in the overall connected car story to meet the needs of mass data transfer within the In-Vehicle Network. We have recently seen these types of challenges and a similar evolution in enterprise data centers, where intelligent systems running data-intensive workloads — such as Artificial Intelligence and Machine Learning — have drastically increased the overall design complexity.

Data Center Resource Disaggregation Drives Need for Cost-Effective 400/800-GbE Interconnects

As new compute-intensive machine learning (ML) and artificial intelligence (AI) workloads drive servers to adopt faster PCI Express® 5.0 links, lower-latency cache-coherent protocols like Compute Express Link™ (CXL™), and a dizzying array of memory, storage, AI processor (AIP), smart NIC, FPGA, and GPU elements, heterogeneous computing is also pushing the need for blazing-fast networks to interconnect these resources.

Seamless Transition to PCIe® 5.0 Technology in System Implementations

In this PCI-SIG® hosted technical webinar, Astera Labs' engineers explore the changes between the PCIe 4.0 and PCIe 5.0 specifications, including signal integrity and system design challenges, where the right balance must be found between PCB materials, connector types, and the use of signal conditioning devices for practical compute topologies. Through an objective analysis, the goal is to provide the audience with a methodology to optimize signal and link integrity performance, present best practices for system board design to support PCIe 5.0 technology applications, and test for system-level interoperation.