Helios puts “rack as product” on the market, Intel’s rack-scale vision shows up on the floor, and vendors from Giga Computing to Rack Renew turn open specs into buyable racks, pods—and faster time-to-online.
Open collaboration just leveled up: OCP pushes shared specs from rack to data center—power, cooling, networking, and ops—so AI capacity can scale faster, with less friction and more choice.
Appointment to the Open Compute Project Foundation board of directors and contribution of the Foundation Chiplet System Architecture (FCSA) spec underscore Arm’s ascendancy in hyperscale and AI data centers.
CelLink’s ultrathin flex harnessing ushers in a new era in compute infrastructure innovation, cutting cable volume by up to 90% and boosting density, reliability, and efficiency.
As AI workloads push storage power consumption higher, the path to true storage efficiency demands systems-level thinking that spans hardware, software, and better metrics for picking the right drives.
Dave Driggers, CEO of Cirrascale, breaks down what “compute efficiency” really means, from GPU utilization and TCO modeling to token-based pricing that drives predictable customer value.
Ventiva discusses how hard-won laptop cooling know-how can unlock inside-the-box gains for AI servers and racks—stabilizing hotspots, preserving acoustics, and boosting performance.
At GTC DC, NVIDIA outlined DOE-scale AI systems, debuted NVQLink to couple GPUs and quantum, partnered with Nokia on AI-RAN to 6G, mapped Uber robotaxis for 2027, and highlighted Synopsys’ GPU gains.
Design shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.
Traditional data protection becomes the bottleneck when GPU idle time costs millions. Joint testing with Solidigm shows how next-generation solutions maintain full speed during drive failures.
From provisioning to observability to protection, HPE’s expanding cloud software suite targets the repatriation wave.
LLMs have given attackers new angles. Fortinet showed, step by step, how AI-driven probes escalate—and how FortiGate, FortiWeb, FortiAnalyzer, and FortiSOAR close the door without slowing the business.
TechArena host Allyson Klein chats with Credo VP Don Barnetson at OCP Lisbon 2024 about how his company is delivering innovative optical solutions that address the AI era’s requirements for scalable data movement in the data center and beyond.
TechArena host Allyson Klein chats with Open Compute Project VP of Emerging Markets, Steve Helvie, about the proceedings in Lisbon this week and how OCP is helping to shape the cutting edge of infrastructure innovation.
TechArena host Allyson Klein chats with ZeroPoint Technologies’ VP of Business Development Nilesh Shah about the AI era’s demands for memory innovation, how advanced chiplet architectures will help semiconductor teams improve memory access for balanced system delivery, and how ZeroPoint Technologies plans to play a strategic role in this major market transition.
TechArena host Allyson Klein chats with HPE’s Jean-Marie Verdun about his organization’s groundbreaking work to redefine firmware management using OpenBMC technology and how this breakthrough addresses data center customer demands.
TechArena host Allyson Klein chats with CircleB’s Matty Bakkeren about how his organization is leveraging OCP specifications to deliver innovative and sustainable solutions to data center customers, how AI is reshaping operator requirements, and how he sees the market shaping up in 2024.
TechArena hosts Allyson Klein and Jeniece Wnorowski chat with CoreWeave’s Jacob Yundt about how his organization is delivering a scalable data pipeline to AI customers utilizing breakthrough VAST Data solutions featuring Solidigm QLC SSDs.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.
Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.
Recorded live at OCP in San Jose, Allyson Klein talks with CESQ’s Lesya Dymyd about hybrid quantum-classical computing, the new Maison du Quantique, and how real-world use cases may emerge over the next 5–7 years.