Allyson Klein predicts inference spreading from cloud to edge, agentic oversight reshaping ops, privacy battles intensifying, scientific computing facing brain drain, and quantum finally breaking through.
RackRenew remanufactures OCP-compliant infrastructure into certified, warranty-backed assemblies—standardized, tested, and ready to deploy for faster capacity without bespoke engineering.
These data-infrastructure shifts will determine which enterprises scale AI in 2026, from real-time context and agentic guardrails to governance, efficiency, and a more resilient data foundation.
From circularity to U.S. assembly, Giga Computing lays out a rack-scale roadmap tuned for the next phase of AI—where inference drives scale and regional supply chains become a competitive edge.
In Part 2 of Matty Bakkeren’s 2026 predictions series, he explores how regulation, sovereignty, and public trust will push data centers to behave more like utilities than tech projects.
Arm’s OCP board seat and new FCSA spec push chiplet interoperability from idea to implementation—enabling mix-and-match silicon and smarter storage so teams can build AI without hyperscaler budgets.
What modern storage really means, how on-prem arrays compare to first-party cloud services, and a clear checklist to pick the right fit for cost, control, scalability, and resilience.
As part of Flex, JetCool is scaling its microconvective cooling technology to help hyperscalers deploy next-gen systems faster, streamlining cooling deployments from server to rack in the AI era.
Ventiva discusses how hard-won laptop cooling know-how can unlock inside-the-box gains for AI servers and racks—stabilizing hotspots, preserving acoustics, and boosting performance.
At GTC DC, NVIDIA outlined DOE-scale AI systems, debuted NVQLink to couple GPUs with quantum processors, partnered with Nokia on AI-RAN as a path to 6G, mapped Uber robotaxi plans for 2027, and highlighted Synopsys’ GPU-driven gains.
Design shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.
Traditional data protection becomes the bottleneck when GPU idle time costs millions. Joint testing with Solidigm shows how next-generation solutions maintain full speed during drive failures.
Allyson chats with Vast Data co-founder and CMO Jeff Denworth about Universal Storage and why it aims to disrupt traditional data paradigms.
Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.