Purpose-Built Connectivity for AI and Cloud Infrastructure
The explosion in the size of models trained for modern Generative AI applications is driving a transformational change in data center connectivity. Astera Labs’ PCIe, CXL, and Ethernet connectivity solutions are purpose-built to unleash the full potential of AI and cloud infrastructure.
Astera Labs Announces Pricing of Initial Public Offering
Astera Labs, Inc. (“Astera Labs”) today announced the pricing of its initial public offering of 19,800,000 shares of its common stock at a price to the public of $36.00 per share.
Astera Labs Announces Launch of Initial Public Offering
Santa Clara, CA — March 8, 2024 — Astera Labs, Inc. (“Astera Labs”) today announced the launch of its initial public offering of 17,800,000 shares of its common stock. The offering consists of 14,788,903 shares of common stock offered by Astera Labs and 3,011,097 shares of common stock to be sold by certain of Astera…
Astera Labs Expands Widely Deployed, Field-Tested Retimer Portfolio with Industry’s Lowest Power PCIe 6.x/CXL 3.x Solution
Astera Labs announces the expansion of its widely deployed, field-tested Aries PCIe/CXL Smart DSP Retimer portfolio to include a solution that delivers robust, low-power PCIe® 6.x and CXL® 3.x connectivity between next-generation GPUs, accelerators, CPUs, NICs, and CXL memory controllers in data-centric systems.
Cloud Infrastructure Fleet Management Made Easy With COSMOS
Large server deployments for Artificial Intelligence (AI) and general-purpose computing in hyperscale data centers provide enormous benefits in terms of raw compute power, efficiency, and cost amortization. The on-demand nature and low up-front cost of cloud computing are attractive to an increasing number of enterprises. However, managing such a large fleet of systems presents complex…
Astera Labs’ Flexible CXL Product Suite Enables Low-Latency Memory Expansion
Artificial intelligence (AI) is among the most transformative technologies impacting everyday life. Data-intensive AI applications, as well as in-memory databases, high-performance computing (HPC), and high-performance file systems, are driving the need for faster interconnects between CPUs, GPUs, TPUs, DPUs, SmartNICs, and FPGAs. Low latency is also critical, especially for memory interconnects. Compute Express Link™…
Breaking Through the Memory Wall
The term “memory wall” was first coined in 1994 to define what was becoming an obvious problem at the time: processor performance was outpacing memory interconnect bandwidth. In other words, memory access was limiting compute performance. Almost 30 years later, this statement still holds true, especially in memory-intensive applications such as artificial intelligence (AI), where…