Astera Labs at OCP APAC Summit: Advancing Open AI Infrastructure 2.0 Through Rack-Scale Connectivity

Paroma Sen, VP, Corporate Marketing

As AI training clusters scale beyond 200,000 GPUs, traditional server architectures require a fundamental paradigm shift. Join Astera Labs at the OCP APAC Summit, August 5-6 in Taipei, as we put a spotlight on the transition to AI Infrastructure 2.0, where the rack is replacing the server as the new unit of compute.

This transformation isn’t just evolutionary; it’s essential. The relentless pursuit of AI model performance has changed the infrastructure equation: modern AI workloads demand such tight coupling and low-latency communication among hundreds of accelerators that entire racks must function as unified computing platforms rather than as collections of individual servers. Traditional server-centric architectures have hit a wall, and the fastest path to rack-scale transformation lies in purpose-built solutions developed within open ecosystems. When companies collaborate on common standards, innovation happens in parallel rather than in isolation, enabling the shift toward larger, faster GPU pods connected by open interconnects that treat the rack as the fundamental unit of compute.

For architects designing these next-generation systems, our OCP APAC sessions provide actionable guidance for navigating this transition. The complexity of AI Infrastructure 2.0 requires support for multiple interconnect protocols—UALink™ for scale-up, Ethernet for scale-out, PCIe® for peripherals, and CXL® for memory—each optimized for specific use cases within the unified rack architecture. Our presentations bring together speakers from across Astera Labs’ deep technical talent pool, spanning scale-up fabrics, ecosystem partnerships, and product engineering, to share insights on building efficient, collaborative AI infrastructure through open standards.

Hear From Our Experts

  • What Matters for Scale-up Fabric
    • Speaker: Sharada Yeluri, Associate VP Engineering, Scale-up Fabrics
    • Tuesday, August 5: 4:25pm-5:00pm  
    • This session will examine the critical requirements for rack-scale interconnect fabrics as AI workloads scale to thousands of accelerators. We’ll explore UALink’s open protocol stack and switch fabric architecture, contributing to the community’s understanding of how open standards can meet next-generation connectivity demands through collaborative ecosystem development.
  • Scaling the Next Wave of Servers with PCIe 6
    • Speaker: Chris Petersen, Fellow, Technology & Ecosystems
    • Wednesday, August 6: 9:30am-9:50am  
    • As AI Infrastructure 2.0 drives the need for modular, rack-scale server designs, this presentation will explore how PCIe 6 and CXL 3.x foundations enable more efficient and flexible system architectures. Through practical use cases and system design examples, attendees will gain insights into building scalable AI infrastructure that aligns with open compute principles.
  • Role of Ethernet in Next-Generation AI System Architectures
    • Speaker: Susmita Joshi, Product Line Manager
    • Wednesday, August 6: 10:45am-11:00am
    • This session addresses the evolution of high-speed protocols in AI Infrastructure 2.0 environments, examining how PCIe, CXL, UALink, Ethernet, and Ultra Ethernet complement or compete in modern architectures. We’ll explore the Scale-Up and Scale-Out paradigms and Ethernet’s problem-solving potential, contributing to community discussions on optimal protocol selection for cloud service architectures.
  • Mixing PCIe 5 and PCIe 6 in AI Platforms: Benefits and Challenges  
    • Speaker: Caleb Shetland, Director, Product Applications Engineering  
    • Wednesday, August 6: 1:30pm-1:45pm
    • As AI Infrastructure 2.0 bandwidth demands accelerate PCIe 6 adoption while many components remain on PCIe 5, this presentation will examine the complexities of mixed-generation topologies. We’ll share insights on addressing speed mismatches, PCIe 6-specific features, and design considerations for OCP-compliant, disaggregated system architectures that meet AI performance requirements.

  • Panel: Scale-up Ecosystem Discussion
    • Moderator: Chris Petersen, Fellow, Technology & Ecosystems
    • Wednesday, August 6: 2:00pm-2:30pm
    • Join ecosystem partners AMD, Broadcom, and Panmnesia for a collaborative discussion on scale-up infrastructure challenges and opportunities, fostering cross-industry dialogue on open solutions.
  • Architecting the Future: Server Design in the Age of AI
    • Speaker: Chris Petersen, Fellow, Technology & Ecosystems
    • Wednesday, August 6: 3:00pm-4:00pm
    • This panel brings together industry leaders to explore how server architectures are evolving to meet AI’s unprecedented demands. From chiplet-based designs and high-speed interconnects to modular standards and open ecosystems, the discussion will highlight key design trade-offs and the critical role of collaboration in accelerating AI Infrastructure 2.0 deployment across the community.

Let’s connect in Taipei and drive the future of open AI infrastructure together. Register today, then email us to schedule a meeting with our connectivity experts at the show.