From data center to edge, Arm is enabling full-stack AI efficiency, powering ecosystems with performance-per-watt optimization, tailored silicon, and software portability across environments.
Real-Time Energy Routing (RER) treats electricity like data—modular, dynamic, and software-defined—offering a scalable path to resilient, sustainable data center power.
Intel shares insights on Arm vs. x86 efficiency, energy goals for 2030, AI-driven power demands, and how enterprises can navigate compute efficiency in the AI era.
From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
Exploring how Flex is rethinking data center power and cooling – from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions – with Chris Butler, President of Embedded & Critical Power.
These data-infrastructure shifts will determine which enterprises scale AI in 2026, from real-time context and agentic guardrails to governance, efficiency, and a more resilient data foundation.
From circularity to U.S. assembly, Giga Computing lays out a rack-scale roadmap tuned for the next phase of AI—where inference drives scale and regional supply chains become a competitive edge.
In Part 2 of Matty Bakkeren’s 2026 predictions series, he explores how regulation, sovereignty, and public trust will push data centers to behave more like utilities than tech projects.
Arm’s OCP board seat and new FCSA spec push chiplet interoperability from idea to implementation—enabling mix-and-match silicon and smarter storage so teams can build AI without hyperscaler budgets.
From WEKA’s memory grid and exabyte storage to 800G fabrics, liquid-cooled AI factories, edge clusters, and emerging quantum accelerators, SC25 proved HPC is now about end-to-end AI infrastructure.
Xeon 6 marries P-cores, E-cores, and scalable memory to feed data-hungry HPC workloads, eliminating bandwidth bottlenecks so spectral sims and other memory-bound codes can finally scale.
TechArena host Allyson Klein chats with HPE’s Jean-Marie Verdun about his organization’s groundbreaking work to redefine firmware management using OpenBMC technology and how this breakthrough addresses data center customer demands.
TechArena host Allyson Klein chats with CircleB’s Matty Bakkeren about how his organization is leveraging OCP specifications to deliver innovative and sustainable solutions to data center customers, how AI is reshaping operator requirements, and how he sees the market shaping up in 2024.
TechArena hosts Allyson Klein and Jeniece Wnorowski chat with CoreWeave’s Jacob Yundt about how his organization delivers a scalable data pipeline to AI customers using breakthrough VAST Data solutions featuring Solidigm QLC SSDs.
The TechArena kicks off a Data Insights Series in collaboration with Solidigm. Host Allyson Klein welcomes co-host Jeniece Wnorowski and Solidigm data center marketing director Ace Stryker to the program to talk about data in the AI era, the series objectives, and how SSD innovation sits at the foundation of a new data pipeline.
TechArena host Allyson Klein chats with OCP’s Cliff Grossner about the chiplet era, how processor architecture has evolved into a chiplet economy, and how the OCP is helping create an open marketplace for innovation.
TechArena host Allyson Klein chats with Voltron Data CEO Josh Patterson about the delivery of Theseus, a composable data system framework that gives developers new interoperability and flexibility for AI-era data challenges.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.