Learn how our comprehensive interoperability testing reduces design challenges, so you can accelerate time-to-market, streamline development efforts and reduce costs for designing and deploying heterogeneous infrastructure with CXL technology.
Astera Labs Video Library
After establishing the foundation of our rigorous testing, we worked with our customers to determine the most popular memory configurations for their systems and applications, which we've included in our initial interop reports. We include 64GB DDR5-4800 RDIMMs from Micron, Samsung, and SK Hynix, each of which is tested with CXL 1.1-capable CPUs from AMD and Intel.
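The coverage described above can be pictured as a cross-product of memory parts and host CPUs. The sketch below is purely illustrative (it is not Astera Labs' test tooling); the vendor names come from the text, and the matrix simply enumerates each RDIMM-with-CPU pairing:

```python
# Illustrative sketch of the interop matrix described above: each DDR5 RDIMM
# vendor paired with each CXL 1.1-capable CPU vendor. Vendor names are from
# the text; the matrix itself is a hypothetical illustration.
from itertools import product

rdimms = ["Micron 64GB DDR5-4800", "Samsung 64GB DDR5-4800", "SK Hynix 64GB DDR5-4800"]
cpus = ["AMD (CXL 1.1)", "Intel (CXL 1.1)"]

# 3 RDIMM vendors x 2 CPU vendors = 6 tested configurations
test_matrix = list(product(rdimms, cpus))
for dimm, cpu in test_matrix:
    print(f"{dimm} with {cpu}")
```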
The increasing volume of data and complexity of models require advancements in cloud server architecture to remove memory bottlenecks and unlock performance for compute-intensive workloads such as Artificial Intelligence and Machine Learning. The Leo Memory Connectivity Platform for CXL 1.1 and 2.0 eliminates the memory bottlenecks inherent in today's architectures and enables new heterogeneous infrastructure to increase performance and reduce costs for cloud-scale deployment.
As our appetite for creating and consuming massive amounts of data continues to grow, so too will our need for increased cloud capacity to store and analyze this data. Additionally, the server connectivity backbone for data center infrastructure needs to evolve as complex AI and ML workloads become mainstream in the cloud.
Today's data center networks are primarily serviced by 25G/lane Ethernet technology; however, these networks are quickly moving to 50G and 100G/lane, allowing hyperscalers to add more servers and switches to their Clos network topologies and support data-intensive workloads such as AI and Machine Learning. This rapid growth in Ethernet port speed is creating a new set of challenges for the design complexity and serviceability of hyperscale architectures.
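The jump from 25G/lane to 50G and 100G/lane compounds at the port level, since a port's aggregate bandwidth is its lane count times the per-lane rate. A minimal sketch of that arithmetic (illustrative only, not Astera Labs tooling) for a common 4-lane port:

```python
# Illustrative arithmetic: aggregate Ethernet port bandwidth scales as
# lane count x per-lane signaling rate across SerDes generations.
LANE_RATES_GBPS = [25, 50, 100]  # per-lane generations named in the text

def port_bandwidth_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate port bandwidth in Gb/s: lanes multiplied by per-lane rate."""
    return lanes * lane_rate_gbps

for rate in LANE_RATES_GBPS:
    bw = port_bandwidth_gbps(4, rate)
    print(f"4-lane port at {rate}G/lane -> {bw}G aggregate")
```

The same 4-lane port goes from 100G aggregate at 25G/lane to 400G at 100G/lane, which is why signal-integrity and serviceability challenges grow with each generation.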
Interconnect technologies will play an important role in the overall connected car story to meet the needs of mass data transfer within the In-Vehicle Network. We have recently seen these types of challenges and a similar evolution in enterprise data centers, where intelligent systems running data-intensive workloads — such as Artificial Intelligence and Machine Learning — have drastically increased the overall design complexity.