Astera Labs Technology Insights

Learn How to Solve Connectivity Bottlenecks Throughout the Data Center

Technology Insights
PCI Express® 5.0 Architecture Channel Insertion Loss Budget

The upgrade from PCIe® 4.0 to PCIe 5.0 doubles the data rate from 16 GT/s to 32 GT/s, and the higher-frequency signal suffers greater attenuation per unit distance, even though the PCIe 5.0 specification increases the total insertion loss budget to 36 dB. After deducting the loss budgets for the CPU package, add-in card (AIC), and CEM connector, only 16 dB remains for the system board. Within that remaining budget, engineers must also reserve a safety margin for board loss variations due to temperature and humidity.
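
To make the budget arithmetic concrete, here is a minimal sketch in Python. The 36 dB total comes from the PCIe 5.0 specification; the per-segment numbers for the CPU package, AIC, and CEM connector are illustrative placeholders (the article does not break them out), chosen so that 16 dB remains for the system board.

```python
# Minimal sketch of PCIe 5.0 channel insertion-loss (IL) budgeting at 16 GHz.
TOTAL_BUDGET_DB = 36.0  # PCIe 5.0 bump-to-bump IL budget

# Hypothetical per-segment allocations (dB) -- adjust to your design.
segments = {
    "cpu_package": 9.0,
    "add_in_card": 9.5,
    "cem_connector": 1.5,
}

board_budget = TOTAL_BUDGET_DB - sum(segments.values())
print(f"System board IL budget: {board_budget:.1f} dB")  # 16.0 dB

# Reserve part of the budget as margin for temperature/humidity variation.
margin = 0.15 * TOTAL_BUDGET_DB
print(f"Usable board budget after 15% margin: {board_budget - margin:.1f} dB")
```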

Simulating with Retimers for PCIe® 5.0

The design solution space for high-speed serial links is becoming increasingly complex with increasing data rates, diverse channel topologies, and tuning parameters for active components. PCI Express® (PCIe®) 5.0, at 32 GT/s, is a particularly relevant example of an application whose design solution space can be a daunting problem to tackle, given the performance-cost requirements of its end equipment. This paper is intended to help system designers navigate these design challenges by providing a how-to guide for defining, executing, and analyzing system-level simulations, including PCIe 5.0 Root Complex (RC), Retimer, and End Point (EP).

PCI Express® Retimers vs. Redrivers: An Eye-Popping Difference

A redriver amplifies a signal, whereas a retimer retransmits a fresh copy of the signal. Retimers provide capabilities such as PCIe® protocol participation, lane-to-lane skew compensation, adaptive EQ, diagnostics features, etc. Therefore, retimers particularly address the need for reach extension in PCIe 4.0 and PCIe 5.0 systems, where an increased number of PCIe slots, multiple connectors, and long physical topologies lead to signal integrity (SI) challenges.

The Impact of Bit Errors in PCI Express® Links: The Painful Realities of Low-Probability Events

PCIe 5.0 ushers in the era of >1 Tbps of data bandwidth between two PCIe nodes, and noticeably more Link Errors and DLLP Retries are likely to occur. By reducing insertion loss (shorter traces, better materials, connectors, etc.) or adding retimers to some topologies, system designers can minimize system-level headaches with a target BER of 1E-17 or lower.
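
The scale of the problem follows from simple arithmetic: multiply the link rate by the BER to get an expected error rate. A sketch, assuming a round 1 Tbps aggregate rate:

```python
# Rough arithmetic behind "low-probability events at high bandwidth":
# at >1 Tbps, even tiny bit-error ratios produce frequent error events.
link_rate_bps = 1e12  # illustrative aggregate PCIe 5.0 bandwidth

for ber in (1e-12, 1e-17):  # spec minimum vs. the article's design target
    errors_per_hour = link_rate_bps * ber * 3600
    mtbe_s = 1.0 / (link_rate_bps * ber)  # mean time between errors
    print(f"BER {ber:.0e}: {errors_per_hour:,.2f} errors/hour "
          f"(one error every {mtbe_s:,.0f} s)")
```

At BER 1E-12 this is one error per second; at 1E-17 it is roughly one error per day, which is why the article targets the latter.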

Document Library

Application Notes

  • Fleet Management Made Easy | White Paper | Request Access
    The Aries Smart Retimer portfolio offers unique features to support multiple PCI Express® and Compute Express Link™ (CXL) Links in a system, ranging from x16 to x2 width and running at 4.0 (16 GT/s) and 5.0 (32 GT/s) speeds. See how Aries’ unique feature set and C-SDK collateral enable a powerful array of Link health monitoring tools for data center server fleet management.
  • Aries Compliance Testing | Application Note | Request Access
    This guide shows how to perform PCIe Transmitter and Receiver compliance tests to ensure your system meets PCI-SIG specifications.
  • Aries CScripts Testing | Application Note | Request Access
    This guide shows how to use the Astera Labs plug-in for CScripts to automate system-level tests of PCIe Links in an Intel-based system. CScripts is a collection of Python scripts that perform tests targeted at exercising different aspects of the PCIe Link Training and Status State Machine (LTSSM).
  • Aries IOMT | Application Note | Request Access
    This guide shows how to use the Intel I/O Margin Tool (IOMT) to measure I/O performance in an Intel-based server with Aries Smart Retimers’ built-in loopback mode.
  • Aries PRBS Testing | Application Note | Request Access
    This guide shows how to use Aries Smart Retimers’ built-in pseudo-random bit sequence (PRBS) pattern generators and checkers to perform physical-layer stress tests and monitor per-lane margins and bit error rate.
  • Aries Pre-RMA Checklist | Application Note | Request Access
    Resolving potential quality issues is a top priority. This step-by-step guide helps gather critical information in-system prior to initiating an RMA.
  • Aries Preset Sweep Testing | Application Note | Request Access
    This guide shows how to use the Python-SDK to automatically sweep over all Transmitter preset settings and capture the bit error rate (BER), margin information, and other useful performance metrics in a loopback configuration.
  • Aries RX Lane Margining | Application Note | Request Access
    The PCIe Base Specification has a provision for collecting Receiver margin information from all Receivers in a system during the L0 state of a Link, using in-band Control Skip Ordered Sets at 16 GT/s and 32 GT/s. This guide shows how Aries Smart Retimers support Lane Margining for both timing and voltage, with an example using the Intel Lane Margining Tool (LMT).
  • Aries Security and Robustness | Application Note | Request Access
    This guide covers ways to use the Aries Smart Retimer and the associated C-SDK collateral in a system where security and robustness are critical to maximizing system performance and up-time.
  • Aries Self Test | Application Note | Request Access
    This guide shows how to use the Aries Smart Retimer built-in self-test feature to diagnose situations where a device is suspected to be damaged or non-functional, possibly due to electrical/thermal over-stress, mechanical damage, etc.
FAQs

400G/800G Ethernet FAQ

What are the design challenges for 400G/800G Ethernet applications using Direct Attached Copper (DAC)?
  1. At 50 Gbps/lane, passive direct-attach copper (DAC) cables barely reach 3 meters. At 100 Gbps/lane, DACs may only have a 2-meter practical reach limit.
  2. The switch PCB consumes too much of the channel budget, which then limits the cable reach and increases cable gauge.
  3. DACs are rigid, heavy, and bulky, restricting airflow for system cooling and making rack servicing difficult.

Read more

What are the design challenges for 400G/800G Ethernet applications using optical interconnects?
  1. Optical modules have high power consumption: a 400G module consumes around 12W, and an 800G module may consume up to 20W.
  2. Optical modules require advanced low-loss materials, which are expensive.
  3. Optical modules have a shorter lifespan and are less reliable than active copper cables, so data center operators must constantly maintain and replace failed modules.

Read more

What are the potential solutions to address high-speed switch-to-server interconnects?
  1. Active Optical Cables (AOC) can be used for rate conversion and to achieve a thin wire profile. However, such optical designs incur additional cost, raise reliability concerns, and require more power.
  2. Active Copper Cables (ACC) can be used for rate conversion and have a lower design cost than AOCs while supporting even thinner-gauge cabling than passive DACs. General-purpose ACCs are limited by their lack of diagnostics and security features.
  3. Smart Electrical Cables (SEC) that utilize Taurus Smart Cable Modules have all the benefits of an ACC with the added “smarts” required by Cloud Service Providers.

Read more

What are the typical use cases for the Taurus Smart Cable Module?
  1. Switch-to-server: ToR switch to Network Interface Card (NIC) interconnects on a server.
  2. Switch-to-switch: interconnects within a spine switch, and spine switch to Exit Leaf interconnects.

Get product info

What are the rate conversions for Taurus Smart Cable Modules?

Taurus Smart Cable Modules can provide gearbox functionality at 200GbE from 4x50G to 8x25G.

Download Product Brief
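
A quick sanity check of the conversion above, as a sketch; the lane counts and per-lane rates are the ones named in this FAQ, while the host/cable labels are illustrative:

```python
# Gearbox conversion: a 200GbE link carried as 4 lanes of 50G on one side
# and 8 lanes of 25G on the other. Aggregate throughput must match.
host_side = {"lanes": 8, "gbps_per_lane": 25}   # e.g., previous-gen NIC
cable_side = {"lanes": 4, "gbps_per_lane": 50}  # e.g., switch-facing side

for name, side in (("host", host_side), ("cable", cable_side)):
    total = side["lanes"] * side["gbps_per_lane"]
    print(f"{name}: {side['lanes']} x {side['gbps_per_lane']}G = {total}G")

assert (host_side["lanes"] * host_side["gbps_per_lane"]
        == cable_side["lanes"] * cable_side["gbps_per_lane"] == 200)
```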

What are the challenges in high-speed switch-to-server interconnects?
  1. Rate mismatches between NIC and switch lead to wasted switch bandwidth.
  2. Traditional DAC interconnects are too short, thick, and bulky to handle high-speed Ethernet signals between ToR switches and multiple racks.

Read more

How could a Taurus Smart Cable Module overcome design challenges of DAC interconnects?

Smart Electrical Cables (SEC) support longer reach and thinner cabling while adding security and diagnostic capability.

Read more

How can Cloud Service Providers maximize ToR port speeds while still using previous-generation servers?

A Taurus Smart Cable Module with gearbox capability can be used on the NIC to resolve the per-lane rate disparity and reduce the end-to-end channel loss, thereby increasing the cable reach and/or reducing cable gauge.

Get product info

What is the typical channel loss for a DAC cable?

In a typical 3 m, 34 AWG copper cable, the channel loss is about 28 dB at 12.9 GHz, but can be as high as 36 dB in the worst case.

Read more
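
For intuition, the figures above imply a per-meter loss, from which a rough reach bound follows. The 30 dB cable-only budget in this sketch is a hypothetical value for illustration, not a spec number:

```python
# Back-of-envelope DAC reach estimate: a 3 m cable with ~28 dB typical loss
# at 12.9 GHz implies roughly 9.3 dB/m. Dividing an assumed cable-only
# budget by the per-meter loss bounds the reach.
typ_loss_db, worst_loss_db, length_m = 28.0, 36.0, 3.0
CABLE_BUDGET_DB = 30.0  # hypothetical end-to-end copper budget

for label, loss in (("typical", typ_loss_db), ("worst-case", worst_loss_db)):
    db_per_m = loss / length_m
    reach_m = CABLE_BUDGET_DB / db_per_m
    print(f"{label}: {db_per_m:.1f} dB/m -> ~{reach_m:.1f} m "
          f"for a {CABLE_BUDGET_DB:.0f} dB budget")
```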

What fleet management features are supported by Taurus Smart Cable Modules?

Taurus Smart Cable Modules’ advanced fleet management capabilities include full CMIS support, security, and extensive diagnostics (cable degradation monitoring, host-cable security, multiple loopback modes, and pattern generation/checking).

Download Product Brief

How can the Taurus Smart Cable Module adapt to different applications?

Taurus offers various firmware and setting updates to adapt to diverse system topologies, including firmware flexibility, in-field upgrade support, health monitoring and debug, and CMIS extension.

Download Product Brief

What firmware settings can be updated in the Taurus SCM?

A user can update module management functions, adaptation algorithms, and full-module firmware even after the cable is deployed to the switch system.

Download Product Brief

How is a firmware update for Taurus SCM done?

We offer complete CMIS Firmware update procedures in the product datasheet.

What is the difference between PAM4 and NRZ?
  1. NRZ is a modulation technique that uses two voltage levels to represent logic 0 and logic 1. PAM4 uses four voltage levels to represent the four combinations of two bits: 11, 10, 01, and 00.
  2. PAM4 has the advantages of halving the Nyquist frequency and doubling the throughput for the same Baud rate. This spares designers from having to develop infrastructure, such as silicon and cables, that supports bandwidths up to 50 GHz.
  3. The SNR loss of a PAM4 signal compared to an NRZ signal is ~9.5 dB (see the sketch after this list).

Download Intel App Note
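
The ~9.5 dB figure in item 3 is just the amplitude ratio between a PAM4 eye (one third of the full swing) and an NRZ eye, expressed in dB. The sketch below also shows the halved Nyquist frequency from item 2, using an illustrative 50 Gbps lane rate:

```python
import math

# PAM4 squeezes four levels into the same voltage swing, so each eye is
# one third of the NRZ eye amplitude; 20*log10(3) is the SNR penalty.
snr_penalty_db = 20 * math.log10(3)
print(f"PAM4 vs NRZ SNR penalty: {snr_penalty_db:.2f} dB")  # ~9.54 dB

# The compensating benefit: same throughput at half the Baud rate,
# i.e., half the Nyquist frequency.
bits_per_symbol = {"NRZ": 1, "PAM4": 2}
rate_gbps = 50  # illustrative per-lane rate
for mod, bps in bits_per_symbol.items():
    baud = rate_gbps / bps
    print(f"{mod}: {rate_gbps} Gbps -> {baud:.1f} GBd, "
          f"Nyquist {baud / 2:.1f} GHz")
```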

Smart Retimer FAQ

What are the benefits of Aries PCIe Smart Retimers compared to general-purpose retimers?

Astera Labs Aries PCIe Smart Retimers offer exceptional robustness, ease of use, and a rich set of Fleet Management capabilities. Get more details >

How do I determine if a Retimer is required?

There are generally three ways to approach this:

  1. Channel Loss Budget Analysis
  2. Simulate the channel s-parameters in the Statistical Eye Analysis Simulator (SeaSim) tool to determine whether the post-equalized eye height (EH) and eye width (EW) meet the minimum eye-opening requirements: ≥15 mV EH and ≥0.3 UI EW at a Bit Error Ratio (BER) ≤ 1E-12.
  3. Consider your cost threshold for system upgrades

View Signal Integrity Challenges for PCIe 5.0 OCP Topologies Video >

What are the differences between Retimers and Redrivers?

A redriver amplifies a signal, whereas a retimer retransmits a fresh copy of the signal.

Get a Detailed Comparison >

How much insertion loss can a Retimer support?

For PCIe 5.0, 36 dB in the pre-channel and 36 dB in the post-channel. Ideally, with one Retimer, the total supported loss from Root Complex to Endpoint is 72 dB, and with two cascaded Retimers it is 108 dB. In practice, designers should still leave 10-20% margin from a system design point of view.
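
As a sketch of this arithmetic: the 36 dB per segment is from the PCIe 5.0 specification, and the 15% derating below is one point in the 10-20% margin range mentioned above.

```python
# Each PCIe 5.0 Retimer resets the channel, so every segment gets a fresh
# 36 dB allocation; total budget scales with the number of segments.
SEGMENT_BUDGET_DB = 36.0

def total_budget(num_retimers: int, margin_frac: float = 0.15) -> float:
    """Total Root Complex to Endpoint IL budget with N cascaded Retimers."""
    if not 0 <= num_retimers <= 2:
        raise ValueError("PCIe allows at most 2 cascaded Retimers per Link")
    segments = num_retimers + 1
    return segments * SEGMENT_BUDGET_DB * (1 - margin_frac)

for n in range(3):
    print(f"{n} retimer(s): {total_budget(n, 0):.0f} dB ideal, "
          f"{total_budget(n):.1f} dB with 15% margin")
```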

How do I fine-tune a Retimer EQ setting?

There is no need to fine-tune a Retimer EQ setting: the Retimer participates in Link Equalization with the Root Complex and Endpoint and automatically fine-tunes its receiver EQ.

What is the maximum number of cascaded Retimers allowed?

The maximum number of Retimers that may be cascaded in a Link is two, as defined in the PCIe specification.

Get a Detailed Explanation on the PCI-SIG blog >

If equalization can be bypassed in Retimers in PCIe 5.0 architecture, how would an Endpoint (EP) detect if there is a Retimer present?

Even when equalization is bypassed, a Retimer will still assert the Retimer Present bit (TS2 symbol 5, bit 4) at the 2.5 GT/s data rate so that the Root Complex and EP can learn that a Retimer is present in the link.

Are there special considerations during link training to avoid timeouts when using Retimers?

There are no “special” considerations. During Equalization, the Retimer’s upstream pseudo port (USPP) and the Endpoint simultaneously train their receivers, with a total of 24 ms allotted to do so. The same happens with the downstream pseudo port (DSPP) and the Root Complex. The timeouts are the same whether or not a Retimer is present.

Is a Retimer essentially a two-port PCIe packet switch?

Not quite. Each port of a packet switch has a full PCIe protocol stack: Physical Layer, Data Link Layer, and Transaction Layer.

A packet switch has at least one root port and at least one non-root port.

A Retimer, by contrast, has an upstream-facing Physical Layer and a downstream-facing Physical Layer but no Data Link or Transaction Layer.

A Retimer’s ports are therefore considered pseudo ports. Because a Retimer does not have, nor does it need, these higher-logic layers, the latency through a Retimer is much smaller than the latency through a packet switch.

Is there a difference in Retimer functionality from PCIe 5.0 specification compared to PCIe 4.0 specification?

The only notable differences are:

  • As with all PCIe 5.0 transmitters, the Retimer’s transmitters must support 32 GT/s precoding when requested by the link partner.
  • As with all PCIe 5.0 receivers, the Retimer’s receivers must support Lane Margining in both time and voltage.

Other than keeping the same throughput, is a Retimer required to support different link widths for its upstream/downstream ports?

A Retimer is required to have the same link width on its upstream-facing port and on its downstream-facing port. In other words, the link widths must match. A Retimer must also support down-configured link widths, but the width must always be the same on both ports.

Why is a Redriver not recommended for the PCIe 5.0 and PCIe 4.0 Specifications?

Redrivers are not defined or specified within the PCIe Base Specification, so there are no formal guidelines for using a Redriver versus using a Retimer. This topic is covered in more detail in this article:

PCI Express® Retimers vs. Redrivers: An Eye-Popping Difference.

Do you suggest putting the Retimer close to the receiver?

A Retimer’s transmitters and receivers, on both pseudo ports, must meet the PCIe Base Specifications. This means that a Retimer can support the full channel budget (nominally 36 dB at 16 GHz) on both sides — before and after the Retimer. Calculating the insertion loss (IL) budget should be done separately for each side of the Retimer, and channel compliance should be performed for each side as well, just as you would do for a Retimer-less Root-Complex-to-Endpoint link.

If a Redriver or Retimer is present, is there any way to enable or disable the Redriver or Retimer?

Redrivers and Retimers are active components in the data path: their packages impose signal attenuation, their active circuits apply boost, and Retimers additionally perform clock and data recovery. As such, there is no way to truly disable these components and still have data pass through; when disabled, no data will pass through a Redriver or Retimer.

How do I decide between enhanced PCB material and Retimers to solve signal integrity issues?
  1. Determine if a Retimer is needed based on the candidate PCB materials
  2. Define a simulation space, identifying worst-case conditions (temperature, humidity, impedance, etc.) and a minimum set of parameters (e.g., Transmitter Presets)
  3. Define the evaluation criteria, such as minimum eye height/width
  4. Execute and analyze results

View Signal Integrity Challenges for PCIe 5.0 OCP Topologies Video >

How do I define evaluation criteria?

Bit error rate (BER) is the ultimate gauge of link performance, but an accurate measure of BER is not possible in relatively short, multi-million-bit simulations.

Instead, this analysis suggests the following pass/fail criteria, which consist of two rules:

    1. A link must meet the receiver’s eye height (EH) and eye width (EW) requirements.
    2. A link must meet Rule 1 for at least half of the Tx Preset settings (≥5 out of 10).
  • Rule 1 establishes that there is a viable set of settings that results in the desired BER. The specific EH and EW required by the receiver are implementation-dependent.
  • Rule 2 ensures that the link has adequate margin and is not overly sensitive to the Tx Preset setting. A minimal sketch applying both rules follows the link below.

View Signal Integrity Challenges for PCIe 5.0 OCP Topologies Video >
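
Here is a minimal sketch of both rules applied to hypothetical post-equalization results for the 10 Tx Presets. The EH/EW thresholds are the example values quoted earlier in this FAQ (≥15 mV, ≥0.3 UI); all eye numbers are made up for illustration.

```python
# Pass/fail evaluation over a Tx Preset sweep (P0-P9).
EH_MIN_MV, EW_MIN_UI = 15.0, 0.3  # receiver-dependent example thresholds

# Hypothetical results: preset -> (eye height in mV, eye width in UI)
results = {f"P{i}": (eh, ew) for i, (eh, ew) in enumerate(
    [(22, .42), (19, .38), (16, .33), (14, .29), (18, .36),
     (21, .40), (12, .25), (17, .34), (20, .39), (13, .27)])}

# Rule 1: each preset passes if both EH and EW clear the thresholds.
passing = [p for p, (eh, ew) in results.items()
           if eh >= EH_MIN_MV and ew >= EW_MIN_UI]

# Rule 2: the link passes if at least 5 of the 10 presets pass Rule 1.
link_ok = len(passing) >= 5
print(f"Presets passing Rule 1: {passing}")
print(f"Link passes (Rule 2, >=5/10): {link_ok}")
```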

How do I execute and analyze results?

Use IBIS models and time-domain simulations.

PCIe® FAQ

What are the most widely used PCIe interconnect scenarios in data centers?
  1. Within a Server: CPU to GPU, CPU to Network Interface Card (NIC), CPU to Accelerator, CPU to SSD
  2. Within a Rack: CPU to JBOG and JBOF through board-to-board connectors or cables
  3. Emerging GPU-to-GPU or Accelerator-to-Accelerator interconnects

PCIe 5.0 Architecture Channel Insertion Loss Budget >

What are the challenges of PCIe 5.0 designs?

As the demand for artificial intelligence and machine learning grows, new system topologies based on PCIe 5.0 technology will be needed to deliver the required increases in data performance.

While the transition from PCIe 4.0 architecture to PCIe 5.0 architecture increases the channel insertion loss (IL) budget from 28 dB to 36 dB, new design challenges arise from the higher losses at higher data rates. Other standards operating above 30 GT/s usually adopt the PAM-4 modulation method to make the signal’s Nyquist frequency one quarter of the data rate, at the cost of ~9.5 dB of signal-to-noise ratio (SNR).

However, PCIe 5.0 continues to use the non-return-to-zero (NRZ) signaling scheme, so the Nyquist frequency of the signal is one half of the data rate, i.e., 16 GHz. The higher the frequency, the greater the attenuation. The signal attenuation caused by the channel IL is the biggest challenge of PCIe 5.0 system design.

PCIe 5.0 Architecture Channel Insertion Loss Budget >
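
As a quick sketch of the frequency arithmetic (the PAM-4 line is a hypothetical variant for comparison, not part of the PCIe 5.0 specification):

```python
# NRZ puts the Nyquist frequency at half the data rate; PAM-4 at the same
# data rate would halve the symbol rate, putting Nyquist at one quarter.
data_rate_gtps = 32.0  # PCIe 5.0

f_nyq_nrz = data_rate_gtps / 2   # 16 GHz (PCIe 5.0, NRZ)
f_nyq_pam4 = data_rate_gtps / 4  # 8 GHz (hypothetical PAM-4 encoding)

print(f"NRZ  Nyquist: {f_nyq_nrz:.0f} GHz")
print(f"PAM4 Nyquist: {f_nyq_pam4:.0f} GHz (at ~9.5 dB SNR cost)")
```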

What are the new specifications in PCIe 5.0?
  1. CTLE & DFE: PCIe 5.0 specifies the bump-to-bump IL budget as 36 dB for 32 GT/s, and the bit error rate (BER) must be less than 1E-12. To address the problem of high signal attenuation, the PCIe 5.0 standard defines the reference receiver such that the continuous-time linear equalizer (CTLE) model includes an adjustable DC gain as low as -15 dB, whereas the reference receiver for 16 GT/s specifies only -12 dB. The reference decision feedback equalizer (DFE) model includes three taps for 32 GT/s and only two taps for 16 GT/s.
  2. Precoding: Due to the significant role the DFE circuit plays in the receiver’s overall equalization, burst errors are more likely to occur than at 16 GT/s. To counteract this risk, PCIe 5.0 introduces Precoding in the protocol. With precoding enabled at the transmitter side and decoding at the receiver side, the chance of burst errors is greatly reduced, thereby enhancing the robustness of the PCIe 5.0 32 GT/s Link.

How much system board budget do I have for PCIe 5.0?

16 dB, but the channel imperfections caused by vias, stubs, AC coupling capacitors and pads, and trace variation further reduce this budget.

View PCIe 5.0 Architecture Channel Insertion Loss Budget Video >

How can I solve Signal Integrity problems in PCIe 5.0?

By leveraging advanced PCB materials and/or PCIe 5.0 Retimers to ensure sufficient end-to-end design margin, system designers can ensure a smooth upgrade to PCIe 5.0 architecture.

View PCIe Webinar for More Details >

How will PCIe 6.0 differ from PCIe 5.0?

PCIe 6.0 will adopt PAM4 signaling instead of the NRZ used in previous generations to achieve 64 GT/s. However, it will remain fully backwards compatible with PCIe 1.0 through PCIe 5.0. Please see our industry news section for more resources on PCIe 6.0.

How to define a simulation space?

The main independent variable in PCIe Link simulations is Transmitter Preset—pre-defined combinations of pre-shoot and de-emphasis, and 10 such Presets are defined in the PCIe specification.

View Signal Integrity Challenges for PCIe 5.0 OCP Topologies Video >

What are some design considerations for the base board budget?
  1. As the PCB temperature rises, the insertion loss (IL) of the PCB trace increases
  2. Process fluctuations during PCB manufacturing can result in slightly narrower or wider line widths, which lead to fluctuations in IL
  3. The amplitude of the Nyquist-frequency signal (a 16 GHz sine wave in the case of 32 GT/s NRZ signaling) at the source is 800 mV pk-pk, which reduces to about 12.7 mV after 36 dB of attenuation. This underscores the need to leave some IL margin for the receiver to account for reflections, crosstalk, and power supply noise, all of which can degrade the SNR.

Thus, the IL budget for the PCB trace on the system base board should be 16 dB minus some margin reserved for the above factors. Many hardware engineers and system designers leave 10-20% of the overall channel IL budget as margin; in the case of a 36 dB budget, this amounts to roughly 4-7 dB.

PCIe 5.0 Architecture Channel Insertion Loss Budget >
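
The 12.7 mV figure follows directly from the dB definition; a sketch reproducing it along with the 10-20% margin rule of thumb:

```python
# A 36 dB loss scales an 800 mV pk-pk Nyquist-frequency tone by 10**(-36/20).
v_tx_mv, loss_db = 800.0, 36.0

v_rx_mv = v_tx_mv * 10 ** (-loss_db / 20)
print(f"Received amplitude: {v_rx_mv:.1f} mV pk-pk")  # ~12.7 mV

# The 10-20% margin rule of thumb applied to the 36 dB budget
# (roughly the 4-7 dB cited above):
for frac in (0.10, 0.20):
    print(f"{frac:.0%} margin = {frac * loss_db:.1f} dB")
```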

How does the loss budget translate into reach extension?

In an add-in-card topology, only a 16 dB IL budget remains for the system board. After reserving a safety margin for board loss variations due to temperature and humidity, this is equivalent to roughly 8 inches of trace, even when upgrading to an expensive ultra-low-loss PCB material. However, reach requirements can easily exceed 8 inches in complex topologies.
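
The reach numbers above reduce to a division of budget by per-inch loss. In this sketch, the ~2 dB/inch figure is back-calculated from the FAQ's own numbers (16 dB to ~8 inches for ultra-low-loss material at 16 GHz) and is illustrative, not a material spec:

```python
def reach_inches(budget_db: float, loss_db_per_inch: float) -> float:
    """Trace length supported by a given IL budget and per-inch loss."""
    return budget_db / loss_db_per_inch

# Ultra-low-loss material: ~2 dB/inch implied by 16 dB -> ~8 inches.
print(f"Ultra-low-loss: ~{reach_inches(16.0, 2.0):.0f} in")

# A Retimer resets the channel mid-link, restoring a fresh budget and
# roughly doubling the reachable trace length end-to-end.
print(f"With one Retimer: ~{2 * reach_inches(16.0, 2.0):.0f} in")
```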

Why does PCIe 5.0 architecture not support an embedded clock?

PCIe 5.0 architecture, like PCIe 4.0 and 3.0 architectures, supports two clock architectures:

  • Common REFCLK (CC): The same 100-MHz reference clock source is distributed to all components in the PCIe link — Root Complex, Retimer, and Endpoint. Due to REFCLK distribution via PCB routing, fanout buffers, cables, etc., the phase of the REFCLK will be different for all components.
  • Independent REFCLK (IR): The Root Complex and Endpoint use independent reference clocks, and the Tx and Rx must meet more stringent specifications when operating in IR mode than in CC mode. The PCIe Base Specification does not specify the properties of independent reference clocks.

How is Burst Error Reporting considered in the PCIe 5.0 specification?

Burst errors are not reported any differently than regular correctable/uncorrectable errors. In fact, burst errors may cause silent data corruption, meaning multiple bits in error can lead to an undetected error event. Therefore, it is incumbent on system designers and PCIe component providers to consciously enable precoding if there is a concern or risk of burst errors in a system.

Is there a standard host and root complex channel sNp model published by PCI-SIG?

PCI-SIG does not publish official or “standard” channel models; however, the Electrical Workgroup (EWG) does post example channel models. For the PCIe 5.0 specification, the reference package models are posted here: https://members.pcisig.com/wg/PCIe-Electrical/document/folder/885.

You can also find example pad-to-pad channel models, shared by a few member companies during specification development, by searching for *.s24p in the following folder: https://members.pcisig.com/wg/PCIe-Electrical/document.

Does PCI-SIG provide a tool for interoperability tests?

PCI-SIG defines the specifications but does not provide a tool for interoperability testing. ASIC vendors and OEMs/ODMs generally provide these tools for testing and stressing the PCIe link to make sure there are no interoperability issues.

Other than an add-in-card (CEM connector), are other connectors like M.2 supported in the PCIe 5.0 interface?

There are multiple connector types and form factors in development targeting PCIe 5.0 signal speeds, including M.2, U.2, U.3, mezzanine connectors, and more.

What ultra-low-loss PCB material do you recommend for PCIe 5.0 technology?

There is no industry-standard definition of mid-loss, low-loss, and ultra-low-loss. It is good practice to start from the loss budget analysis to select which type of PCB material is needed for the system. Megtron-6 or other types of PCB material with similar performance as that of Megtron-6 are commonly used in PCIe 5.0 server systems where the distance from Root Complex pin to CEM connector exceeds 10″.

Have there been changes to the CEM add-in card for RX compliance testing?

Test methodology is similar to that of CEM 4.0. See details from the PCIe 5.0 PHY Test Spec v0.5.

Is there a difference in system-level TX/RX compliance testing with a Retimer in the system compared to without?

No, there is no difference.

Is NEXT/FEXT going to be a required or optional test?

At this moment, these are not specified in the PCIe 5.0 PHY Test Spec v0.5.

Is RX Lane Margin a must for PCIe 5.0 specification compliance?

The Lane Margin Test (LMT) is defined in PCIe 5.0 PHY Test Spec v0.5, and RX Lane Margining in time and voltage is required for all PCIe 5.0 receivers. However, according to the test specification, LMT checks whether the add-in card under test implements the lane margining capability. The margin values reported are not checked against any pre-defined pass/fail criteria.

What is the scope bandwidth for PCIe 5.0 TX testing?

33 GHz for the PCIe 5.0 TX test. See more from PCIe 5.0 PHY Test Spec v0.5.

Do you suggest that vendors implement the LTSSM test, or is it OK to just pass TX compliance and RX JBERT tests?

Passing TX compliance and RX BER test does not guarantee system-level interoperability. It is advisable to perform separate tests to exercise the LTSSM, as well as application-specific tests, such as hot unplug/hot plug, to demonstrate system-level robustness.

How do you enable precoding? Is precoding a feature specific to PCIe 5.0 specification?

The enabling/disabling of Precoding is negotiated during link training. Whether Precoding is needed is largely dependent on the specific receiver implementation. As an example, receivers that rely heavily on DFE tap-1 may choose to request Precoding during link training. So, each receiver will make its own determination, based on its architecture, as to whether it should request Precoding. Precoding is defined in the PCIe 5.0 specification but not in the PCIe 4.0 specification.

Does precoding impact performance?

The PCIe 5.0 specification introduces selectable Precoding. Precoding breaks an error burst into two errors: an entry error and an exit error. However, a random single-bit error would also be converted to two errors, and therefore a net 1E-12 BER with precoding disabled would effectively become 2E-12 BER with precoding enabled.
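
The doubling effect can be stated as a tiny model, with the FAQ's 1E-12 random error ratio as input (a sketch; burst statistics are omitted for simplicity):

```python
# Precoding converts each error event into exactly two errors: an entry
# error and an exit error. This shortens bursts but doubles the count of
# random single-bit errors.
random_ber = 1e-12  # random single-bit error ratio (from the FAQ)

ber_without_precoding = random_ber
ber_with_precoding = 2 * random_ber  # each single error becomes two

print(f"Effective BER without precoding: {ber_without_precoding:.0e}")
print(f"Effective BER with precoding:    {ber_with_precoding:.0e}")  # 2e-12
```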

What is PAM4 and why will PCIe 6.0 use it?

PAM4 stands for 4-level Pulse Amplitude Modulation, a type of signaling that carries 2 bits (00, 01, 10, or 11) per symbol instead of the 1 bit (0 or 1) used in previous PCIe generations.

What challenges will arise with the use of PAM4?

The largest challenge will be handling higher error rates. To address this, the PCIe 6.0 standard will also introduce Forward Error Correction (FEC).

CXL™ FAQ

Why is CXL™ important?

CXL™ is needed to overcome CPU-memory and memory-storage bottlenecks faced by computer architects. CXL allows for a new memory fabric supporting various processors (CPUs, GPUs, DPUs) sharing heterogeneous memory. Future data centers need heterogeneous compute, new memory and storage hierarchy, and an agnostic interconnect to tie it all together.

What kind of memory is supported over CXL™?

Traditional DRAM and persistent storage class memory (SCM) are supported, allowing for flexibility between performance and cost.

What is CXL™?

Compute Express Link™ (CXL™) is an open interface that standardizes a high-performance interconnect for data-centric platforms involving various XPUs. It provides a uniform means of connection to CPUs, GPUs, FPGAs, storage, memory, and networking.

What are the different devices that the CXL™ protocol supports?

The CXL™ protocol supports three different types of devices:

  • Type 1 Caching Devices / Accelerators
  • Type 2 Accelerators with Memory
  • Type 3 Memory Buffer

What are some CXL™ based applications?
  • Memory tiering, in which additional capacity is supplied with a variable mix of lower-latency direct-attached memory and higher-latency large-capacity memory
  • Higher VM density per system by attaching more memory capacity
  • Large databases can use a caching layer provided by SCM to improve performance

What are the 3 CXL™ protocols?
  • CXL.io is used for initialization, link-up, device discovery and enumeration, and register access. It provides a non-coherent load/store interface for I/O devices similar to PCIe® 5.0.
  • CXL.cache defines interactions between a Host and Device, which allows CXL devices to cache host memory with low latency.
  • CXL.mem provides a Host processor with direct access to Device-attached memory using load/store commands.

What kind of electrical signals support CXL™?

CXL™ runs on PCIe® 5.0 electrical signals. CXL runs on PCIe PHY and supports x16, x8, and x4 link widths natively.

What features does CXL™ 2.0 add?

CXL™ 2.0 adds support for switching, persistent memory, and security as well as memory pooling support to maximize memory utilization, reducing or eliminating the need to over-provision memory.

Quality FAQ

What is the process for Failure Analysis (FA)?
  1. If you need to return potentially defective material, please contact Astera Labs’ Customer Service organization.
  2. The Quality team will run an evaluation based upon customer-generated diagnostic logs, production test results, and PCIe system testing, and will share the results using an 8D process.

Where can I get information on FIT analysis and other quality data?

All device qualification data, including FIT calculation, is included in the qualification summary document. Contact us or ask your Astera Labs Sales Manager for further information.

How does Astera stand behind the quality of its products?

Our goal is to provide customers with the highest-quality products by assuring their performance, consistency, and reliability.

Our team values are integral to who we are and how we operate as a company.

Who are Astera Labs' primary design & manufacturing partners? 

We leverage industry-leading partners in both our design and manufacturing processes, such as (but not limited to) TSMC, AWS, Intel, and Synopsys. These partners help us fulfill our mission to provide high-quality, top-performing solutions.

What is the business continuity/contingency plan in case of catastrophic events?

To ensure a consistent supply that meets our customers’ high-volume demands, Astera Labs implements multi-vendor and multi-site manufacturing. This approach gives us a strong business continuity/contingency plan in case of catastrophic events (e.g., earthquake, tsunami, flood, fire, etc.) and allows us to maintain or recover supply quickly.

Ordering FAQ

Where can I order parts?

Customers can order directly from Astera Labs, or can order from one of our franchised partners, which currently include Mouser, EDOM, Eastronics, and Intron.

What does Astera Labs manufacture?

Purpose-built Retimer ICs, Riser Cards, Extender Cards, and Booster Cards for high-performance server, storage, cloud, and workload-optimized systems.

Are there standard Terms and Conditions from Astera Labs for orders?

Please review the Astera Labs Terms of Sale.

Have more questions about Astera Labs products or Technology? Get in touch with an Astera Labs expert.

Video Center
08 Mar
Deploy Robust PCIe® 5.0 Connectivity with Aries Smart Retimers

See our Aries Smart Retimers in action via two interoperability demonstrations with key industry partners’ PCIe® 5.0 root complex and endpoints.

30 Dec
Astera Labs & Intel Capital

Astera Labs Co-Founders Sanjay Gajendra, Jitendra Mohan, and Casey Morrison highlight the company’s enduring partnership with Intel Capital.

10 Nov
Taurus Smart Cable Module™ 400GbE PAM4 Connectivity Demonstration

See a Taurus Smart Cable Module™ enabled 400GbE PAM4 Smart Electrical Cable in action with a demonstration of an end-to-end 400GbE link up passing error-free traffic as well as real-time link diagnostics.

10 Nov
Unlock the Full Potential of CXL™ with Purpose-Built Connectivity Solutions from Astera Labs

Meet the Leo CXL™ Memory Accelerator Platform and Aries CXL Smart Retimers – Astera Labs’ portfolio of solutions that unlock the full potential of data-centric systems based on Compute Express Link™ technology.

03 Nov
Intel Innovation 2021: Astera Labs, Broadcom, Intel & Samsung PCI Express® 5.0 Demo

Astera Labs joined Broadcom, Intel, and Samsung at Intel Innovation 2021 to demonstrate seamless end-to-end PCI Express® (PCIe®) 5.0 interoperation at 32GT/s.

03 Nov
Aries CXL™ Smart Retimer Demo: CXL Ecosystem Interop with Intel and Synopsys

Industry’s first demonstration of a fully formed CXL™ link between an Intel root complex, an Aries CXL Smart Retimer and Synopsys end point IP.

Webinars

Seamless Transition to PCIe® 5.0

Explore the changes between PCIe® 4.0 and PCIe 5.0 specifications, including signal integrity and system design challenges, where the right balance must be found for practical compute topologies.

Register for On-Demand Webinar

PCIe® Retimers to the Rescue Webinar

Discover solutions that address signal-integrity and channel insertion loss challenges to ensure the full potential of the increased bandwidth offered by PCIe® 4.0 and PCIe 5.0 is achieved.

Register for On-Demand Webinar

Signal Integrity Challenges for PCIe® 5.0 OCP Topologies

Discover how to strike the right balance between PCB materials, connector types, and the use of signal conditioning devices for practical compute topologies.

View On-Demand Webinar