From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
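The 38x-to-97% equivalence above is simple arithmetic worth making explicit: a 38x efficiency gain means the same compute consumes 1/38 of the original energy. A quick sketch (figures taken from the blurb, nothing else assumed):

```python
# A 38x energy-efficiency gain means the same work takes 1/38 of the energy.
gain = 38
energy_fraction = 1 / gain                    # ~0.026 of the original energy
reduction_pct = (1 - energy_fraction) * 100   # percentage saved
print(f"{reduction_pct:.1f}% less energy")    # 97.4%, i.e. "roughly a 97% drop"
```

The same math shows why AMD's 2030 target of 20x rack-scale gains would cut per-workload energy by another ~95%.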
Exploring how Flex is rethinking data center power and cooling – from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions – with Chris Butler, President of Embedded & Critical Power.
Vanguard lead data engineer and IEEE senior member Vivek Venkatesan shares how a pivotal Botswana project shaped his path, how trust turns innovation into impact, and much more.
In this 5 Fast Facts on Compute Efficiency Q&A, CoolIT’s Ben Sutton unpacks how direct liquid cooling (DLC) drives PUE toward ~1.02 and unlocks higher rack density, and where it beats immersion on cost and deployment.
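For readers unfamiliar with the metric, PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so a value near 1.0 means almost no cooling or power-conversion overhead. A minimal sketch with illustrative numbers (not figures from the episode):

```python
# PUE = total facility power / IT equipment power.
# Illustrative example: 1 MW of IT load with 20 kW of cooling/overhead.
it_power_kw = 1000.0
overhead_kw = 20.0   # pumps, CDUs, conversion losses, etc. (hypothetical)
pue = (it_power_kw + overhead_kw) / it_power_kw
print(f"PUE = {pue:.2f}")  # 1.02: only ~2% of energy goes to non-IT overhead
```

By contrast, a legacy air-cooled facility at PUE 1.5 would spend 50% of its IT load again on overhead.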
From Wi-Fi to AI, Mark Grodzinsky shares how chance turns, mentors, and market inflection points shaped his career – and why true innovation is about impact, not hype.
Arm’s OCP board seat and new FCSA spec push chiplet interoperability from idea to implementation—enabling mix-and-match silicon and smarter storage so teams can build AI without hyperscaler budgets.
From WEKA’s memory grid and exabyte storage to 800G fabrics, liquid-cooled AI factories, edge clusters, and emerging quantum accelerators, SC25 proved HPC is now about end-to-end AI infrastructure.
Xeon 6 marries P-cores, E-cores, and scalable memory to feed data-hungry HPC workloads, easing bandwidth bottlenecks so spectral sims and other memory-bound codes can finally scale.
On Day 1 of KubeCon + CloudNativeCon Atlanta, CNCF unveiled Kubernetes AI Conformance to make workloads portable—arriving as inference surges to ~1.33 quadrillion tokens/month across Google’s systems.
Multi-year agreement makes VAST AI OS CoreWeave’s primary data foundation, aligning roadmaps for instant dataset access and performance across training and real-time inference in global AI cloud regions.
At the OCP Global Summit, Avayla CEO Kelley Mullick reveals how rapid standardization and hybrid cooling strategies are reshaping infrastructure for the AI era.
TechArena host Allyson Klein chats with Microsoft’s Vice President of Azure AI and HPC Infrastructure, Nidhi Chappell, in advance of Microsoft Build 2024. Nidhi shares how her organization is accelerating deployments of critical technology to fuel the insatiable demand for AI around the world, and how Microsoft’s AI tools, including Copilot, Azure OpenAI, and more, have been met with overwhelming engagement from developers. She also talks about Microsoft’s silicon plans and strategic collaborations with NVIDIA and AMD.
TechArena host Allyson Klein chats with Research Institutes of Sweden’s Jon Summers about the latest research his team has conducted on efficient infrastructure and data center buildout in the wake of massive data center growth for the AI era.
TechArena host Allyson Klein chats with Palo Alto Electron CEO Jawad Nasrullah about his vision for an open chiplet economy, the semiconductor manufacturing hurdles standing in the way of broad chiplet market delivery, and how he plans to play a role in shaping this next evolution of the semiconductor landscape.
TechArena host Allyson Klein chats with OCP’s Raul Alvarez from OCP Lisbon 2024 about his new charter to accelerate growth of the data center market in Europe, as well as his ongoing work in immersion cooling technologies.
TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Weka’s Joel Kaufman, as he tours the Weka data platform and how the company’s innovation provides sustainable data management that scales for the AI era.
TechArena host Allyson Klein chats with PLVision Director of Open Networking Solutions and Strategy, Taras Chornyi, about the progress of SONiC and open network infrastructure for the AI era.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.