VAST Data reveals more of its AI strategy through collaboration announcements with NVIDIA and Supermicro.
TechArena’s take on VAST Data from AI Field Day, and why readers can expect to hear a lot more from the company in the months ahead.
Arm is making headway in delivering an architecture alternative for the data center. Read TechArena’s take on its progress from technology enablement to building a true ecosystem for the workload requirements of the next decade.
TechArena’s take on the collaboration announcement between the Ultra Ethernet Consortium and the Open Compute Project, and what it means for AI clusters.
TechArena’s quick take on WEKA’s discussion from Cloud Field Day.
TechArena’s Cloud Field Day report on AMD’s strategy for cloud native computing.
What modern storage really means, how on-prem arrays compare to first-party cloud services, and a clear checklist to pick the right fit for cost, control, scalability, and resilience.
As part of Flex, JetCool is scaling its microconvective cooling technology to help hyperscalers deploy next-gen systems faster, streamlining cooling deployments from server to rack in the AI era.
Ventiva discusses how hard-won laptop cooling know-how can unlock inside-the-box gains for AI servers and racks—stabilizing hotspots, preserving acoustics, and boosting performance.
At GTC DC, NVIDIA outlined DOE-scale AI systems, debuted NVQLink to couple GPUs with quantum processors, partnered with Nokia on AI-RAN and the path to 6G, mapped Uber robotaxis for 2027, and highlighted Synopsys’ GPU gains.
Design shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.
Traditional data protection becomes the bottleneck when GPU idle time costs millions. Joint testing with Solidigm shows how next-generation solutions maintain full speed during drive failures.
AMI CEO Sanjoy Maity joins In the Arena to unpack the company's shift to open source firmware, OCP contributions, OpenBMC hardening, and the rack-scale future—cooling, power, telemetry, and RAS built for AI.
Graid Technology takes over Intel VROC licensing for data center and workstation customers, extending its RAID portfolio to offer both CPU-integrated and GPU-accelerated solutions.
Ventiva CEO Carl Schlachte joins Allyson Klein to share how the company’s Ionic Cooling Engine is transforming laptops, servers, and beyond with silent, modular airflow.
Discover how JetCool’s proprietary liquid cooling is solving AI’s toughest heat challenges—keeping data centers efficient as workloads and power densities skyrocket.
Allyson Klein hosts Manu Fontaine (Hushmesh) and Jason Rogers (Invary) to unpack TEEs, attestation, and how confidential computing is moving from pilots to real deployments across data center and edge.
On this Data Insights episode, Allyson Klein and Jeniece Wnorowski welcome Mohan Potheri of Hypertec to explore how immersion cooling slashes energy use, shrinks data-center footprints, and powers sustainable, high-density AI, HPC, and edge solutions.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.
Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.