From data center to edge, Arm is enabling full-stack AI efficiency, powering ecosystems with performance-per-watt optimization, tailored silicon, and software portability across environments.
Real-Time Energy Routing (RER) treats electricity like data—modular, dynamic, and software-defined—offering a scalable path to resilient, sustainable data center power.
Intel shares insights on Arm vs. x86 efficiency, energy goals for 2030, AI-driven power demands, and how enterprises can navigate compute efficiency in the AI era.
From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
Exploring how Flex is rethinking data center power and cooling – from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions – with Chris Butler, President of Embedded & Critical Power.
These data-infrastructure shifts will determine which enterprises scale AI in 2026, from real-time context and agentic guardrails to governance, efficiency, and a more resilient data foundation.
From circularity to U.S. assembly, Giga Computing lays out a rack-scale roadmap tuned for the next phase of AI—where inference drives scale and regional supply chains become a competitive edge.
In Part 2 of Matty Bakkeren’s 2026 predictions series, he explores how regulation, sovereignty, and public trust will push data centers to behave more like utilities than tech projects.
Arm’s OCP board seat and new FCSA spec push chiplet interoperability from idea to implementation—enabling mix-and-match silicon and smarter storage so teams can build AI without hyperscaler budgets.
From WEKA’s memory grid and exabyte storage to 800G fabrics, liquid-cooled AI factories, edge clusters, and emerging quantum accelerators, SC25 proved HPC is now about end-to-end AI infrastructure.
Xeon 6 marries P-cores, E-cores, and scalable memory to feed data-hungry HPC workloads, eliminating bandwidth bottlenecks so spectral sims and other memory-bound codes can finally scale.
TechArena host Allyson Klein chats with VectorZero CTO and co-founder Sean Grimaldi about his extensive experience fighting bad actors to protect the nation’s security and how his time at the CIA has informed his approach to eliminating attack vectors at VectorZero.
TechArena host Allyson Klein chats with cPacket Networks CTO Ron Nevo about how observability has evolved for the multi-cloud-to-edge era and why his company delivers four key components of observability solutions to customers.
TechArena host Allyson Klein chats with AMD's Lynn Comp about the changing landscape of CPU design and how AMD is poised to lead future innovation.
TechArena host Allyson Klein chats with OCP’s VP of Market Intelligence and Innovation, Cliff Grossner, about the challenges facing data center innovation and how OCP has transformed to drive deeper industry partnerships and broaden its impact.
TechArena host Allyson Klein chats with OCP Chair and Cloudflare VP Rebecca Weekly about the future of open computing solutions, how regional demands drive the OCP mission, and the importance of sustainability.
TechArena host Allyson Klein chats with Open Compute Foundation leaders Michael Schill and Steve Helvie about the organization’s rising contributions and what it means for broad adoption of open hardware configurations from edge to cloud.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.