Real-Time Energy Routing (RER) treats electricity like data—modular, dynamic, and software-defined—offering a scalable path to resilient, sustainable data center power.
Intel shares insights on Arm vs. x86 efficiency, energy goals for 2030, AI-driven power demands, and how enterprises can navigate compute efficiency in the AI era.
From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
Exploring how Flex is rethinking data center power and cooling – from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions – with Chris Butler, President of Embedded & Critical Power.
Vanguard lead data engineer and IEEE senior member Vivek Venkatesan shares how a pivotal Botswana project shaped his path, how trust turns innovation into impact, and much more.
As AI drives explosive data growth, next-gen SSDs deliver the speed, density, and efficiency to outpace HDDs—reshaping storage strategy for the data-centric data centers of tomorrow.
The Common Vulnerabilities and Exposures (CVE) landscape is shifting—governance is changing, and security pros are moving beyond raw CVE counts to focus on context-aware, risk-based vulnerability management.
Quantum breakthroughs from Microsoft, Quantinuum, and Google signal accelerating progress—but are we nearing a tipping point or still deep in the hype cycle?
OVHcloud’s infrastructure strategy includes value chain integration, water-cooled data centers, and servers with high-performance storage capabilities, setting it apart in the cloud industry.
From breakthrough 122TB SSDs to the industry’s first liquid-cooled storage, Solidigm’s Avi Shetty unpacks how storage is powering AI workloads from hyperscale to neo-cloud.
From GPU and storage servers to turnkey rack-scale solutions, Giga Computing showcases its expanding OCP portfolio and the evolution of Giga PODs for high-density, high-efficiency data centers.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.