Real-Time Energy Routing (RER) treats electricity like data—modular, dynamic, and software-defined—offering a scalable path to resilient, sustainable data center power.
Intel shares insights on Arm vs. x86 efficiency, energy goals for 2030, AI-driven power demands, and how enterprises can navigate compute efficiency in the AI era.
From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
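A minimal arithmetic check of that claim, assuming "same compute" means the workload is held fixed: a 38x efficiency gain means each unit of work draws 1/38 of the original energy, i.e.

$$1 - \tfrac{1}{38} \approx 0.974 \approx 97\%$$

energy reduction, consistent with the roughly 97% figure cited.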
Chris Butler, Flex's President of Embedded & Critical Power, explores how the company is rethinking data center power and cooling, from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions.
Vanguard lead data engineer and IEEE senior member Vivek Venkatesan shares how a pivotal Botswana project shaped his path, how trust turns innovation into impact, and much more.
As AI drives explosive data growth, next-gen SSDs deliver the speed, density, and efficiency to outpace HDDs—reshaping storage strategy for tomorrow’s data-centric data centers.
The Common Vulnerabilities and Exposures (CVE) landscape is shifting—governance is changing, and security pros are moving beyond raw CVE counts to focus on context-aware, risk-based vulnerability management.
Quantum breakthroughs from Microsoft, Quantinuum, and Google signal accelerating progress—but are we nearing a tipping point or still deep in the hype cycle?
OVHcloud’s infrastructure strategy includes value chain integration, water-cooled data centers and servers, and high-performance storage capabilities, setting it apart in the cloud industry.
From breakthrough 122TB SSDs to the industry’s first liquid-cooled storage, Solidigm’s Avi Shetty unpacks how storage is powering AI workloads from hyperscale to neo-cloud.
From GPU and storage servers to turnkey rack-scale solutions, Giga Computing showcases its expanding OCP portfolio and the evolution of Giga PODs for high-density, high-efficiency data centers.
Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.
Recorded live at OCP in San Jose, Allyson Klein talks with CESQ’s Lesya Dymyd about hybrid quantum-classical computing, the new Maison du Quantique, and how real-world use cases may emerge over the next 5–7 years.
CEO Carl Schlachte joins TechArena at OCP Summit to share how Ventiva’s solid-state cooling—proven in dense laptops—scales to servers, cutting noise, complexity and power while speeding deployment.
From OCP Summit San Jose, Allyson Klein and co-host Jeniece Wnorowski interview Dr. Andrew Chien (UChicago & Argonne) on grid interconnects, rack-scale standards, and how openness speeds innovation.
From the OCP Global Summit in San Jose, Allyson Klein sits down with Chris Butler of Flex to unpack how the company is collapsing the gap between IT and power—literally and figuratively.
From OCP Summit 2025, Kelley Mullick joins Allyson Klein and co-host Jeniece Wnorowski for a Data Insights episode on rack-scale design, hybrid cooling (incl. immersion heat recapture), and open standards.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.