Cloud Infrastructure Fleet Management Made Easy With COSMOS
Large server deployments for Artificial Intelligence (AI) and general-purpose computing in hyperscale data centers provide enormous benefits in terms of raw compute power, efficiency, and cost amortization. The on-demand nature and low up-front cost of cloud computing are attractive to an increasing number of enterprises. However, managing such a large fleet of systems presents complex…
Astera Labs’ Flexible CXL Product Suite Enables Low-Latency Memory Expansion
Artificial intelligence (AI) is the single most transformative technology impacting everyday lives. Data-intensive AI applications as well as in-memory databases, high performance computing (HPC) and high-performance file systems are driving the need for faster interconnects between CPUs, GPUs, TPUs, DPUs, SmartNICs and FPGAs. Low latency is also critical, especially for memory interconnects. Compute Express Link™…
Breaking Through the Memory Wall
The term “memory wall” was first coined in 1994 to define what was becoming an obvious problem at the time: processor performance was outpacing memory interconnect bandwidth. In other words, memory access was limiting compute performance. Almost 30 years later, this statement still holds true, especially in memory-intensive applications such as artificial intelligence (AI) where…
Astera Labs Delivers Industry-First CXL Interop with DDR5-5600 Memory Modules
Earlier this year, we announced the launch of our Cloud-Scale Interop Lab for CXL to provide robust interoperability testing between our Leo Memory Connectivity Platform and a growing ecosystem of CXL supported CPUs, memory modules and operating systems. By providing this critical testing, we enable customers to deploy CXL-attached memory with confidence by minimizing interoperational…
Three Things to Know about Astera Labs’ Taurus Ethernet Smart Cable Module Diagnostic Capabilities
Today’s data centers are under pressure to keep up with the ever-increasing demand for data processing, transfer, and storage. This is especially true with the advent of generative artificial intelligence (AI) and the continued investments by more than 97% of organizations in big data and AI initiatives. To keep this data moving and easily accessible,…
The Generative AI Impact: Accelerating the Need for Intelligent Connectivity Solutions
We have entered the Age of Artificial Intelligence, and Generative AI is developing at a rapid pace and becoming integral to our lives. According to Bank of America analysts, “just as the iPhone led to an explosion in the use of smartphones and phone apps, ChatGPT-like technology is revolutionizing AI”. Generative AI is changing every…
Cloud-Scale Infrastructure Fleet Management Made Easy with Aries Smart Retimers
Data centers today house vast numbers of servers, and within each server there is an abundance of storage, specialized accelerators, and networking/communications infrastructure. These represent tens of thousands of interconnected systems, and with the rise of hyperscalers and cloud service providers, the scale of data infrastructure is only expected to grow in the years to…
The Importance of Security Features in a CXL Memory Controller to Protect Mission-Critical Cloud Data
The explosion of modern applications such as Artificial Intelligence, Machine Learning and Deep Learning is changing the very nature of computing and transforming businesses. These applications have opened myriad ways for companies to improve their business development processes, operations, and security and to provide better customer experiences. To support these applications, platforms are being designed…
Deploy Purpose-Built Connectivity Solutions at Scale
Learn how at DesignCon 2023! We are excited to be heading back to the show, taking place January 31-February 2 at the Santa Clara Convention Center. We’ll be demonstrating our portfolio of purpose-built connectivity solutions that eliminate performance bottlenecks throughout the data center. Now, we’re making it easier than ever for you to deploy solutions…
Unlocking Cloud Server Performance with CXL
The increasing volume of data and complexity of models require advancements in cloud server architecture to remove memory bottlenecks and unlock the performance of compute-intensive workloads, such as Artificial Intelligence and Machine Learning. Leo Memory Connectivity Platform for CXL 1.1 and 2.0 eliminates the memory bottlenecks inherent in today’s architectures and enables new heterogeneous infrastructure to increase performance and reduce costs for cloud-scale deployment.
Connectivity Is Key to Harnessing the Data Reshaping Our World
As our appetite for creating and consuming massive amounts of data continues to grow, so too will our need for increased cloud capacity to store and analyze this data. Additionally, the server connectivity backbone for data center infrastructure needs to evolve as complex AI and ML workloads become mainstream in the cloud.
Taurus Smart Cable Modules™: An Active + Smart Approach to 200/400/800GbE
Today’s data center networks are primarily serviced by 25G/lane Ethernet technology; however, these networks are quickly moving to 50G and 100G/lane to allow hyperscalers to add additional servers and switches to their Clos Network topologies and support data-intensive workloads such as AI and Machine Learning. This rapid growth in Ethernet port speed is causing a new set of challenges for design complexity and serviceability of hyperscale architectures.
The Evolution of the In-Vehicle Network
Interconnect technologies will play an important role in the overall connected car story to meet the needs of mass data transfer within the In-Vehicle Network. We have recently seen these types of challenges and a similar evolution in enterprise data centers, where intelligent systems running data-intensive workloads — such as Artificial Intelligence and Machine Learning — have drastically increased the overall design complexity.
Data Center Resource Disaggregation Drives Need for Cost-Effective 400/800-GbE Interconnects
As new compute-intensive machine learning (ML) and artificial intelligence (AI) workloads drive servers to adopt faster PCI Express® 5.0 Links, lower-latency cache-coherent protocols like Compute Express Link™ (CXL™), and a dizzying array of memory, storage, AI processor (AIP), smart NIC, FPGA, and GPU elements, heterogeneous computing is also pushing the need for blazing-fast networks to interconnect these resources.
Seamless Transition to PCIe® 5.0 Technology in System Implementations
In this PCI-SIG® hosted technical webinar, Astera Labs’ engineers explore the changes between PCIe 4.0 and PCIe 5.0 specifications, including signal integrity and system design challenges, where the right balance must be found between PCB materials, connector types and the use of signal conditioning devices for practical compute topologies. Through an objective analysis, the goal is to provide the audience with a methodology to optimize signal and link integrity performance, present best practices for system board design to support PCIe 5.0 technology applications, and test for system level interoperation.
PCI Express® 5.0 Architecture Channel Insertion Loss Budget
The upgrade from PCIe® 4.0 to PCIe 5.0 doubles the data rate from 16GT/s to 32GT/s, but the signal also suffers greater attenuation per unit distance, despite the PCIe 5.0 specification increasing the total insertion loss budget to 36dB. After deducting the loss budget for the CPU package, AIC, and CEM connector, merely 16dB of system board budget remains. Within this remaining budget, engineers must also account for a safety margin to cover board loss variations due to temperature and humidity.
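The budget arithmetic above can be sketched as a simple calculation. The per-element allocations below are illustrative assumptions chosen to match the quoted totals, not the official CEM specification values:

```python
# Illustrative PCIe 5.0 insertion-loss budget breakdown.
# The per-element deductions are ASSUMED values for demonstration only;
# consult the PCIe 5.0 CEM specification for actual allocations.
TOTAL_BUDGET_DB = 36.0  # end-to-end channel insertion loss budget

deductions_db = {
    "CPU package": 9.0,        # assumed allocation
    "add-in card (AIC)": 9.5,  # assumed allocation
    "CEM connector": 1.5,      # assumed allocation
}

board_budget_db = TOTAL_BUDGET_DB - sum(deductions_db.values())
print(f"Remaining system-board budget: {board_budget_db:.1f} dB")
# With these assumed allocations, ~16 dB remains for the system board,
# from which a temperature/humidity safety margin must still be carved out.
```

Any retimer placed mid-channel effectively resets this budget, which is why retimers become attractive once board traces alone cannot stay within the remaining allocation.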
Simulating with Retimers for PCIe® 5.0
The design solution space for high-speed serial links is becoming increasingly complex with increasing data rates, diverse channel topologies, and tuning parameters for active components. PCI Express® (PCIe®) 5.0, at 32 GT/s, is a particularly relevant example of an application whose design solution space can be a daunting problem to tackle, given the performance-cost requirements of its end equipment. This paper is intended to help system designers navigate these design challenges by providing a how-to guide for defining, executing, and analyzing system-level simulations, including PCIe 5.0 Root Complex (RC), Retimer, and End Point (EP).
PCIe Retimers to the Rescue Webinar: PCI Express® Specifications Reach Their Full Potential
In this PCI-SIG® hosted webinar, Kurt Lender of Intel and Casey Morrison of Astera Labs offer solutions to address signal-integrity and channel insertion loss challenges to ensure the full potential of the increased bandwidth offered by PCIe® Gen 4.0 and 5.0 is achieved.
As PCIe specifications continue to double the transfer rates of previous generations, the technology can address various needs for demanding applications, but signal-integrity and channel insertion loss challenges arise as well. Retimers are mixed-signal analog/digital devices that are protocol-aware and able to fully recover data, extract the embedded clock, and retransmit a fresh copy of the data using a clean clock. These devices are fully defined in the PCI Express base specification, including compliance testing, and are used to combat the signal-integrity issues that PCI Express faces at higher data rates.
PCI Express® Retimers vs. Redrivers: An Eye-Popping Difference
A redriver amplifies a signal, whereas a retimer retransmits a fresh copy of the signal. Retimers provide capabilities such as PCIe® protocol participation, lane-to-lane skew compensation, adaptive EQ, and diagnostics features. Therefore, retimers are particularly suited to reach extension in PCIe 4.0 and PCIe 5.0 systems, where an increased number of PCIe slots, multiple connectors, and long physical topologies lead to signal integrity (SI) challenges.
The Impact of Bit Errors in PCI Express® Links: The Painful Realities of Low-Probability Events
PCIe 5.0 ushers in the era of >1Tbps of data bandwidth between two PCIe nodes, and with it, noticeably more Link Errors and DLLP Retries are likely to occur. By reducing insertion loss (shorter traces, better materials, connectors, etc.) or adding retimers to some topologies, system designers can minimize system-level headaches with a target BER of 1E-17 or lower.
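A quick back-of-the-envelope calculation shows why the 1E-17 BER target matters at these speeds. The ~1 Tb/s link rate is taken from the text; the comparison BER of 1E-12 is an illustrative assumption:

```python
# Expected bit-error frequency at PCIe 5.0-class link rates.
# link_rate_bps comes from the >1 Tb/s figure in the text;
# the 1e-12 comparison BER is an illustrative assumption.
link_rate_bps = 1e12      # ~1 Tb/s aggregate across the link
seconds_per_day = 86_400

for ber in (1e-12, 1e-17):
    errors_per_day = link_rate_bps * ber * seconds_per_day
    print(f"BER {ber:g}: ~{errors_per_day:,.3f} bit errors per day")
```

At a BER of 1E-12, a 1 Tb/s link would see on the order of tens of thousands of bit errors per day, each potentially triggering a replay; tightening the channel to 1E-17 brings that down to roughly one error per day, which is why insertion-loss reduction and retimers pay off at the system level.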