

While enterprises pour resources into more GPUs, up to 30% of that computing power sits idle waiting for data. The solution isn't more hardware; it's smarter network architecture.

Design has shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.

Converging forces, including affordable SSDs, ransomware requiring fast restoration capabilities, and AI workloads needing assured data integrity, are redefining protection strategies at unprecedented scale.

From analytics to AI leadership, TechArena Voice of Innovation Banani Mohapatra (Walmart) shares how experimentation, ethics, and human creativity shape the next era of data-driven innovation.

Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.

The AI surge is forcing a fundamental rethink of infrastructure strategy, from unexpected co-location demand to storage breakthroughs that challenge conventional wisdom.

Traditional data protection becomes the bottleneck when GPU idle time costs millions. Joint testing with Solidigm shows how next-generation solutions maintain full speed during drive failures.

From provisioning to observability to protection, HPE’s expanding cloud software suite targets the repatriation wave.

LLMs have given attackers new angles. Fortinet showed, step by step, how AI-driven probes escalate—and how FortiGate, FortiWeb, FortiAnalyzer, and FortiSOAR close the door without slowing the business.

From racks and liquid loops to data placement and standards pace, five takeaways from CoreWeave, Dell, NVIDIA, Solidigm, and VAST Data on building AI factories that keep accelerators busy and dollars well-spent.

At Cloud Field Day 24, Oxide outlines a vertically integrated rack—custom hypervisor, integrated power/network, and open integrations—aimed at bringing hyperscale efficiency and faster deploys to enterprise DCs.

Presenting at Cloud Field Day 24, Pure pitched fleet-level automation across mixed environments as the antidote to storage silos, promising one control plane for legacy systems and modern workloads.

In the Arena: Allyson Klein with Axelera CMO Alexis Crowell on inference-first AI silicon, a customer-driven SDK, and what recent tapeouts reveal about the roadmap.

Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.

From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.

By rethinking how data flows between storage, memory, and compute, organizations unlock performance improvements impossible through isolated optimization.

A 2025 field guide for architects: why Arm’s software gravity and hyperscaler adoption make it the low-friction path today, where RISC-V is gaining ground, and the curveballs that could reshape both.

In this episode of Data Insights, host Allyson Klein and co-host Jeniece Wnorowski sit down with Dr. Rohith Vangalla of Optum to discuss the future of AI in healthcare.

Industry leaders reveal why data-centric, change-ready data center architectures will determine who thrives in the age of unpredictable AI advancements.

From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.

Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.

Helios brings “rack as product” to market, Intel’s rack-scale vision shows up on the show floor, and vendors from Giga Computing to Rack Renew turn open specs into buyable racks, pods—and faster time-to-online.

Recorded live at OCP in San Jose, Allyson Klein talks with CESQ’s Lesya Dymyd about hybrid quantum-classical computing, the new Maison du Quantique, and how real-world use cases may emerge over the next 5–7 years.

CEO Carl Schlachte joins TechArena at OCP Summit to share how Ventiva’s solid-state cooling—proven in dense laptops—scales to servers, cutting noise, complexity and power while speeding deployment.

From OCP Summit San Jose, Allyson Klein and co-host Jeniece Wnorowski interview Dr. Andrew Chien (UChicago & Argonne) on grid interconnects, rack-scale standards, and how openness speeds innovation.

From the OCP Global Summit in San Jose, Allyson Klein sits down with Chris Butler of Flex to unpack how the company is collapsing the gap between IT and power—literally and figuratively.

From OCP Summit 2025, Kelley Mullick joins Allyson Klein and co-host Jeniece Wnorowski for a Data Insights episode on rack-scale design, hybrid cooling (incl. immersion heat recapture), and open standards.

At OCP’s 2025 global summit, Momenthesis founder Matty Bakkeren joins Allyson Klein to explore why open standards and interoperability are vital to sustaining AI innovation at datacenter scale.

Open collaboration just leveled up: OCP pushes shared specs from rack to data center—power, cooling, networking, and ops—so AI capacity can scale faster, with less friction and more choice.

AMI CEO Sanjoy Maity joins In the Arena to unpack the company's shift to open source firmware, OCP contributions, OpenBMC hardening, and the rack-scale future—cooling, power, telemetry, and RAS built for AI.

Appointment to the Open Compute Project Foundation board of directors and contribution of the Foundation Chiplet System Architecture (FCSA) spec underscore Arm’s ascendancy in hyperscale and AI data centers.

Graid Technology takes on Intel VROC licensing for data center and workstation customers, extending its RAID portfolio to offer both CPU-integrated and GPU-accelerated solutions.

As AI spreads across industries, MLPerf is evolving from niche training benchmarks to a shared performance yardstick for storage, automotive, and beyond, capturing a pivotal 2025 moment.

CelLink’s ultrathin flex harnessing ushers in a new era in compute infrastructure innovation, cutting cable volume by up to 90% and boosting density, reliability, and efficiency.

As AI workloads push storage power consumption higher, true storage efficiency demands systems-level thinking that spans hardware and software, plus better metrics for picking the right drives.

As AI workloads scale, cooling must evolve. Iceotope’s liquid cooling technology is a paradigm shift for datacenter and edge infrastructure deployment.

From Citibank to Amazon to AI governance, Bhavnish Walia’s career blends fintech, compliance, and ethical AI. In this Q&A, he shares his innovation framework and vision for augmented creativity.

With explosive data growth and power demands forcing transformation, the future belongs to those who plan for what’s “next-next.”

As GPU racks hit 150kW, throughput per watt has become the efficiency metric that matters, and SSDs are proving their worth over legacy infrastructure with 77% power savings and 90% less rack space.

Voice of Innovation Anusha Nerella shares how fintech, AI, and responsible automation are reshaping the future and why true innovation is less about disruption and more about trust.

A landmark multi-year deal positions AMD as a core compute partner for OpenAI’s expanding AI infrastructure—diversifying its silicon base and reshaping GPU market dynamics.

Rafay Systems is emerging as a key enabler of global AI infrastructure, helping enterprises and neoclouds operationalize large-scale deployments at the dawn of the AI era.

Daniel Wu joins TechArena and Solidigm on Data Insights to share his perspective on bridging academia and enterprise, scaling AI responsibly, and why trustworthy frameworks matter as AI adoption accelerates.

Anusha Nerella joins hosts Allyson Klein and Jeniece Wnorowski to explore responsible AI in financial services, emphasizing compliance, collaboration, and ROI-driven adoption strategies.

Dave Driggers, CEO of Cirrascale, breaks down what “compute efficiency” really means, from GPU utilization and TCO modeling to token-based pricing that drives predictable customer value.

AWS now ships 50% Arm-based compute, and other major cloud providers are following, as efficiency in the gigawatt era and software optimization drive a shift in data center architecture.

At AI Infra Summit, CTO Sean Lie shares how Cerebras is delivering instant inference, scaling cloud and on-prem systems, and pushing reasoning models into the open-source community.

From data center to edge, Arm is enabling full-stack AI efficiency, powering ecosystems with performance-per-watt optimization, tailored silicon, and software portability across environments.

Backed by top U.S. investors, Cerebras secures $1.1B in pre-IPO funding, fueling its AI vision, market traction, and silicon-to-services challenge to NVIDIA.

TechArena Voice of Innovation Tannu Jiwnani explains how to blend GenAI-assisted coding with continuous threat modeling, automated validation, and expert review to accelerate work without compromise.

Real-Time Energy Routing (RER) treats electricity like data—modular, dynamic, and software-defined—offering a scalable path to resilient, sustainable data center power.

Scality CMO Paul Speciale joins Data Insights to discuss the future of storage—AI-driven resilience, the rise of all-flash deployments, and why object storage is becoming central to enterprise strategy.

Intel shares insights on Arm vs. x86 efficiency, energy goals for 2030, AI-driven power demands, and how enterprises can navigate compute efficiency in the AI era.

Ventiva CEO Carl Schlachte joins Allyson Klein to share how the company’s Ionic Cooling Engine is transforming laptops, servers, and beyond with silent, modular airflow.

From racing oils to data center immersion cooling, Valvoline is reimagining thermal management for AI-scale workloads. Learn how they’re driving density, efficiency, and sustainability forward.

This Data Insights episode unpacks how Xinnor’s software-defined RAID for NVMe and Solidigm’s QLC SSDs tackle AI infrastructure challenges—reducing rebuild times, improving reliability, and maximizing GPU efficiency.

From cloud to edge, agentic workflows are moving from pilots to production—reshaping compute, storage, and networks while spotlighting CPU control planes, GPU utilization, and congestion-free fabrics.

Discover how JetCool’s proprietary liquid cooling is solving AI’s toughest heat challenges—keeping data centers efficient as workloads and power densities skyrocket.

In this episode, Allyson Klein, Scott Shadley, and Jeniece Wnorowski (Solidigm) talk with Val Bercovici (WEKA) about aligning hardware and software, scaling AI productivity, and building next-gen data centers.

From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.

From AI Infra Summit, Celestica’s Matt Roman unpacks the shift to hybrid and on-prem AI, why sovereignty/security matter, and how silicon, power, cooling, and racks come together to deliver scalable AI infrastructure.

Allyson Klein talks with Synopsys’ Anand Thiruvengadam on how agentic AI is reshaping chip design to meet extreme performance, time-to-market, and workforce challenges.

From storage to automotive, MLPerf is evolving with industry needs. Hear David Kanter explain how community-driven benchmarking is enabling reliable and scalable AI deployment.

AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.

Solidigm’s Ace Stryker joins Allyson Klein and Jeniece Wnorowski on Data Insights to explore how partnerships and innovation are reshaping storage for the AI era.

Dell outlines how flash-first design, unified namespaces, and validated architectures are reshaping storage into a strategic enabler of enterprise AI success.

With sustainability at the core, Iceotope is pioneering liquid cooling solutions that reduce environmental impact while meeting the demands of AI workloads at scale.
Exploring how Flex is rethinking data center power and cooling – from 97.5% efficient power shelves to liquid cooling and “grid to chip” solutions – with Chris Butler, President of Embedded & Critical Power.

In this episode of In the Arena, David Glick, SVP at Walmart, shares how one of the world’s largest enterprises is fostering rapid AI innovation and empowering engineers to transform retail.

Vanguard lead data engineer and IEEE senior member Vivek Venkatesan shares how a pivotal Botswana project shaped his path, how trust turns innovation into impact, and much more.

Haseeb Budhani, Co-Founder of Rafay, shares how his team is helping enterprises scale AI infrastructure across the globe, and why he believes we’re still in the early innings of adoption.

In this 5 Fast Facts on Compute Efficiency Q&A, CoolIT’s Ben Sutton unpacks how direct liquid cooling (DLC) drives PUE toward ~1.02, unlocks higher rack density, and where it beats immersion on cost and deployment.

Direct from AI Infra 2025, AI Expert & Author Daniel Wu shares how organizations build trustworthy systems—bridging academia and industry with governance and security for lasting impact.

From Wi-Fi to AI, Mark Grodzinsky shares how chance turns, mentors, and market inflection points shaped his career – and why true innovation is about impact, not hype.

Veteran technologist and TechArena Voice of Innovation Robert Bielby reflects on a career spanning hardware, product strategy, and marketing — and shares candid insights on innovation, AI, and the future of the automotive industry.

Recorded at AI Infra Summit 2025 in Santa Clara: Carrier CDAO Arun Nandi on infra as AI’s backbone, how early adopters win on ROI and speed, and what changed in the last 12–24 months.

The new EcoStruxure™ Pod accelerates data center readiness, reducing complexity, cost, and risk while boosting sustainability and AI workload support.

From anti–money-laundering analytics to leading identity protection, Tannu Jiwnani shares how curiosity, resilience, and inclusive leadership shape responsible innovation and why diversity is security’s superpower.

Allyson Klein hosts Manu Fontaine (Hushmesh) and Jason Rogers (Invary) to unpack TEEs, attestation, and how confidential computing is moving from pilots to real deployments across data center and edge.

Experts from across the compute industry provide their bold views and real-world expertise on how AI-era computing is driving business and societal opportunity.

Three groundbreaking inference benchmarks debut, adding reasoning models, speech recognition, and ultra-low-latency scenarios as 27 organizations deliver record results.

Equinix’s Glenn Dekhayser and Solidigm’s Scott Shadley discuss how power, cooling, and cost considerations are leading enterprises to embrace co-location as part of their AI infrastructure strategies.

Two decades of action and bold milestones show why Schneider Electric is recognized as the world’s most sustainable company, driving impact across climate, resources, and digital innovation.

As AI fuels a $7 trillion infrastructure boom, Arm’s Mohamed Awad reveals how efficiency, custom silicon, and ecosystem-first design are reshaping hyperscalers and powering the gigawatt era.

Leading up to Yotta 2025, we discussed compute efficiency with Cliff Federspiel of Vigilent, which is pioneering the use of IoT and AI to deliver dynamic cooling management in mission-critical environments.

CEO Lisa Spelman explains how tackling hidden inefficiencies in AI infrastructure can drive enterprise adoption, boost performance, and spark a new wave of innovation.

Explore myths, metrics, and strategies shaping the future of energy-efficient data centers with Solidigm’s Scott Shadley, from smarter drives to sustainability-ready architectures.

New Synopsys.ai Copilot capabilities deliver 30% faster engineer onboarding and 35% productivity gains, while Microsoft partnership reveals autonomous design agents on the horizon.

Equinix’s Glenn Dekhayser and Solidigm’s Scott Shadley join TechArena to unpack hybrid multicloud, AI-driven workloads, and what defines a resilient, data-centric data center strategy.

Bradbrook shares principles behind Antillion’s edge platforms—usability, fast iteration, real-world testing—and why the metric that matters most is durable value: tech that still works a decade later.

Industry leader Scott Shadley reveals how Solidigm’s innovations in SSDs, partnerships, and architecture are reshaping data centers to meet the rising demands of AI, edge, and enterprise workloads.

From feeding data-hungry GPUs to enabling real-time on-set visual effects, flash storage has evolved from luxury to necessity in modern content creation pipelines.

MLCommons launches MLPerf Automotive v0.5, the first standardized benchmark suite to measure real-world AI performance in safety-critical automotive applications.

From predicting sepsis before symptoms appear to enabling rural clinics to make specialist-level diagnoses, a privacy-first approach to AI in health care promises to transform lives.

Surveying 250 IT pros, we found 29% already run SSDs beyond performance tiers, 81% would migrate when TCO wins, and storage innovation is a top lever to free power and space across the data center.

From Intel’s layoffs to stealth automation, AI is reshaping work at a pace that outstrips human adaptation—driving record stress, uneven gains, and a scramble to reskill before the next downturn hits.

Allyson Klein and Robert Blum of Lightwave Logic unpack how electro-optic polymers, paired with silicon photonics, lower power and boost density on the road to 400G-per-lane optics, with a 2027 volume ramp in sight.

From federated learning and zero-trust to confidential computing, Dr. Rohith Vangalla shares a practitioner’s playbook for explainable, scalable AI that moves healthcare from reactive to proactive.

Our flagship podcast earned a Stevie® in the International Business Awards® annual competition; judges called out the show’s high production quality, editorial clarity, and guest caliber.

Permission Agent turns user-approved signals into auditable datasets—paying contributors in $ASK—so teams can train and personalize AI with verifiable consent and enforceable revocation.