Articles & Insights

Cloud Infrastructure Fleet Management Made Easy With COSMOS

Large server deployments for Artificial Intelligence (AI) and general-purpose computing in hyperscale data centers provide enormous benefits in terms of raw compute power, efficiency, and cost amortization. The on-demand nature and low up-front cost of cloud computing are attractive to an increasing number of enterprises. However, managing such a large fleet of systems presents complex…

Astera Labs’ Flexible CXL Product Suite Enables Low-Latency Memory Expansion

Artificial intelligence (AI) is the single most transformative technology impacting everyday lives. Data-intensive AI applications, as well as in-memory databases, high-performance computing (HPC), and high-performance file systems, are driving the need for faster interconnects between CPUs, GPUs, TPUs, DPUs, SmartNICs, and FPGAs. Low latency is also critical, especially for memory interconnects. Compute Express Link™…

Breaking Through the Memory Wall

The term “memory wall” was first coined in 1994 to define what was becoming an obvious problem at the time: processor performance was outpacing memory interconnect bandwidth. In other words, memory access was limiting compute performance. Almost 30 years later, this statement still holds true, especially in memory-intensive applications such as artificial intelligence (AI) where…

Astera Labs Delivers Industry-First CXL Interop with DDR5-5600 Memory Modules

Earlier this year, we announced the launch of our Cloud-Scale Interop Lab for CXL to provide robust interoperability testing between our Leo Memory Connectivity Platform and a growing ecosystem of CXL-supported CPUs, memory modules, and operating systems. By providing this critical testing, we enable customers to deploy CXL-attached memory with confidence by minimizing interoperational…