Top 5 reasons to meet with Astera Labs at Intel® Innovation
Partnering with Intel and Supermicro, Astera Labs will demonstrate CXL memory expansion using our industry-leading Leo Smart Memory Controllers running real-world workloads, and compare performance benchmarks for direct-attach memory versus CXL-attached memory implementations for large in-memory applications. This demonstration will also highlight how our Aries Smart Retimers extend PCIe (PCI Express) 5.0 and CXL 1.1 signal reach to enable complex topologies. Both the Leo and Aries product lines are in the advanced sampling stage and ready for deployment in cloud servers.
Unlocking Cloud Server Performance with CXL
The increasing volume of data and complexity of models require advancements in cloud server architecture to remove memory bottlenecks and unlock performance for compute-intensive workloads such as Artificial Intelligence and Machine Learning. The Leo Memory Connectivity Platform for CXL 1.1 and 2.0 eliminates the memory bottlenecks inherent in today’s architectures and enables new heterogeneous infrastructure to increase performance and reduce costs for cloud-scale deployments.
Connectivity Is Key to Harnessing the Data Reshaping Our World
As our appetite for creating and consuming massive amounts of data continues to grow, so too will our need for increased cloud capacity to store and analyze this data. Additionally, the server connectivity backbone for data center infrastructure needs to evolve as complex AI and ML workloads become mainstream in the cloud.
Taurus Smart Cable Modules™: An Active + Smart Approach to 200/400/800GbE
Today’s data center networks are primarily serviced by 25G/lane Ethernet technology; however, these networks are quickly moving to 50G and 100G/lane so hyperscalers can add more servers and switches to their Clos network topologies and support data-intensive workloads such as AI and Machine Learning. This rapid growth in Ethernet port speed creates a new set of challenges for the design complexity and serviceability of hyperscale architectures.
The Evolution of the In-Vehicle Network
Interconnect technologies will play an important role in the overall connected car story to meet the needs of mass data transfer within the In-Vehicle Network. We have recently seen these types of challenges and a similar evolution in enterprise data centers, where intelligent systems running data-intensive workloads — such as Artificial Intelligence and Machine Learning — have drastically increased the overall design complexity.
Data Center Resource Disaggregation Drives Need for Cost-Effective 400/800-GbE Interconnects
As new compute-intensive machine learning (ML) and artificial intelligence (AI) workloads drive servers to adopt faster PCI Express® 5.0 Links, lower-latency cache-coherent protocols like Compute Express Link™ (CXL™), and a dizzying array of memory, storage, AI processor (AIP), smart NIC, FPGA, and GPU elements, heterogeneous computing is also pushing the need for blazing-fast networks to interconnect these resources.
PCI Express® 5.0 Architecture Channel Insertion Loss Budget
The upgrade from PCIe® 4.0 to PCIe 5.0 doubles the data rate from 16 GT/s to 32 GT/s, but the faster signal also suffers greater attenuation per unit distance, even though the PCIe 5.0 specification increases the total insertion loss budget to 36 dB. After deducting the loss budgets for the CPU package, AIC, and CEM connector, only about 16 dB remains for the system board. Within that remaining budget, engineers must also allow a safety margin for board loss variations due to temperature and humidity.
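To make the budget arithmetic concrete, here is a minimal bookkeeping sketch. The per-element allocations and the board loss per inch are illustrative assumptions, not values taken from the post or the PCIe 5.0 specification; only the 36 dB total and the roughly 16 dB system-board remainder come from the text above.

```python
# Rough PCIe 5.0 channel insertion-loss budget check.
# Per-element allocations below are illustrative assumptions, not spec values.

TOTAL_BUDGET_DB = 36.0   # PCIe 5.0 end-to-end insertion loss budget

# Assumed allocations for the non-system-board elements (hypothetical)
cpu_package_db   = 9.5
add_in_card_db   = 9.0
cem_connector_db = 1.5

system_board_db = TOTAL_BUDGET_DB - (cpu_package_db + add_in_card_db + cem_connector_db)
print(f"Remaining system-board budget: {system_board_db:.1f} dB")   # ~16 dB

# Assuming ~1.0 dB/inch board loss at the Nyquist frequency (varies with material,
# temperature, and humidity), keep a safety margin before routing:
loss_per_inch_db = 1.0
safety_margin_db = 2.0
max_trace_inches = (system_board_db - safety_margin_db) / loss_per_inch_db
print(f"Approx. max routable trace length: {max_trace_inches:.1f} in")
```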
Simulating with Retimers for PCIe® 5.0
The design solution space for high-speed serial links is becoming increasingly complex with rising data rates, diverse channel topologies, and tuning parameters for active components. PCI Express® (PCIe®) 5.0, at 32 GT/s, is a particularly relevant example of an application whose design solution space can be a daunting problem to tackle, given the performance-cost requirements of its end equipment. This paper helps system designers navigate these design challenges by providing a how-to guide for defining, executing, and analyzing system-level simulations that include a PCIe 5.0 Root Complex (RC), Retimer, and Endpoint (EP).
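As a rough illustration of the first-order bookkeeping that typically precedes full RC-Retimer-EP simulation, the sketch below compares a direct topology against a retimed one, where the retimer terminates one electrical segment and launches a fresh copy of the signal on the next, giving each segment its own loss budget. All element names and loss values are hypothetical placeholders, not simulation results.

```python
# First-order segment-loss check before running full RC-Retimer-EP simulations.
# Element names and dB values are hypothetical placeholders.

BUDGET_PER_SEGMENT_DB = 36.0  # each electrical segment gets its own budget

def segment_loss(elements):
    """Sum the insertion loss (dB) of the elements in one electrical segment."""
    return sum(elements.values())

# Direct RC-to-EP topology (no retimer): one long segment
direct = {"rc_package": 9.5, "baseboard": 20.0, "connector": 1.5,
          "riser": 8.0, "ep_package": 5.0}

# Retimed topology: the retimer splits the channel into two shorter segments
seg1 = {"rc_package": 9.5, "baseboard": 20.0, "retimer_pkg": 2.0}
seg2 = {"retimer_pkg": 2.0, "connector": 1.5, "riser": 8.0, "ep_package": 5.0}

for name, segs in [("direct", [direct]), ("retimed", [seg1, seg2])]:
    ok = all(segment_loss(s) <= BUDGET_PER_SEGMENT_DB for s in segs)
    losses = [f"{segment_loss(s):.1f} dB" for s in segs]
    print(f"{name:8s} segments: {losses}  within budget: {ok}")
```

In this example the direct channel blows past the per-segment budget, while each half of the retimed channel fits comfortably, which is the motivation for then simulating the retimed link in detail.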
PCI Express® Retimers vs. Redrivers: An Eye-Popping Difference
A redriver amplifies a signal, whereas a retimer retransmits a fresh copy of the signal. Retimers provide capabilities such as PCIe® protocol participation, lane-to-lane skew compensation, adaptive equalization, and diagnostics features. Retimers therefore address the need for reach extension in PCIe 4.0 and PCIe 5.0 systems, where an increased number of PCIe slots, multiple connectors, and long physical topologies lead to signal integrity (SI) challenges.
The Impact of Bit Errors in PCI Express® Links: The Painful Realities of Low-Probability Events
PCIe 5.0 ushers in an era of >1Tbps of data bandwidth between two PCIe nodes, in which noticeably more Link Errors and DLLP retries are likely to occur. By reducing insertion loss (shorter traces, better materials and connectors, etc.) or adding retimers to some topologies, system designers can minimize system-level headaches, with a target BER of 1E-17 or lower.
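For a sense of scale, the quick calculation below estimates how often a single bit error would appear on a PCIe 5.0 x16 link at a few BER levels. The link width and the list of BER values are assumptions chosen for illustration; the 1E-17 target is the figure mentioned above.

```python
# Back-of-the-envelope error-rate estimate for a PCIe 5.0 x16 link.
# Link width and BER values are illustrative assumptions, not measured data.

lane_rate_bps = 32e9                       # 32 GT/s per lane
lanes = 16
bits_per_second = lane_rate_bps * lanes    # ~512 Gb/s per direction (~1 Tb/s both ways)

for ber in (1e-12, 1e-15, 1e-17):
    errors_per_hour = bits_per_second * ber * 3600
    print(f"BER {ber:.0e}: ~{errors_per_hour:.2g} bit errors per hour per direction")
```

Even at a nominally respectable 1E-12, errors arrive on the order of thousands per hour at these speeds, which is why pushing the system-level BER down toward 1E-17 matters.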