Open collaboration just leveled up: OCP pushes shared specs from rack to data center—power, cooling, networking, and ops—so AI capacity can scale faster, with less friction and more choice.
An appointment to the Open Compute Project Foundation board of directors and the contribution of the Foundation Chiplet System Architecture (FCSA) spec underscore Arm’s ascendancy in hyperscale and AI data centers.
CelLink’s ultrathin flex harnessing ushers in a new era in compute infrastructure innovation, cutting cable volume by up to 90% and boosting density, reliability, and efficiency.
As AI workloads push storage power consumption higher, the path to true storage efficiency demands systems-level thinking that spans hardware, software, and better metrics for selecting the right drives.
Dave Driggers, CEO of Cirrascale, breaks down what “compute efficiency” really means, from GPU utilization and TCO modeling to token-based pricing that drives predictable customer value.
From data center to edge, Arm is enabling full-stack AI efficiency, powering ecosystems with performance-per-watt optimization, tailored silicon, and software portability across environments.
Flex strengthens its direct liquid cooling portfolio with its acquisition of JetCool, adding JetCool’s MIT-developed tech to Flex’s broad offerings across power, cooling, and data center infrastructure.
JetCool's innovative liquid cooling targets hotspots directly, supporting AI's growing power needs while enhancing sustainability through efficient heat reuse and scalable solutions for data centers.
CoolIT Systems is driving efficiency and performance for AI in data centers with cutting-edge liquid cooling solutions.
Arm advances data center tech with power-efficient CPUs, modular chiplet designs, and the scalable Neoverse platform, supporting hyperscalers and AI workloads with sustainable, adaptive solutions.
This Halloween, TechArena explores 2025’s top spooky tech trends: zombie data haunting storage, AI hallucinations leading to false data, and cyborg advances redefining humanity!
The 2024 OCP Summit spotlighted performance, power, and standards in the AI era, delivering record attendance and more than one jaw-dropping announcement. Check out our top takeaways from the event.
Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.
Recorded live at OCP in San Jose, Allyson Klein talks with CESQ’s Lesya Dymyd about hybrid quantum-classical computing, the new Maison du Quantique, and how real-world use cases may emerge over the next 5–7 years.
CEO Carl Schlachte joins TechArena at OCP Summit to share how Ventiva’s solid-state cooling, proven in dense laptops, scales to servers, cutting noise, complexity, and power while speeding deployment.
From OCP Summit San Jose, Allyson Klein and co-host Jeniece Wnorowski interview Dr. Andrew Chien (UChicago & Argonne) on grid interconnects, rack-scale standards, and how openness speeds innovation.
From the OCP Global Summit in San Jose, Allyson Klein sits down with Chris Butler of Flex to unpack how the company is collapsing the gap between IT and power—literally and figuratively.
From OCP Summit 2025, Kelley Mullick joins Allyson Klein and co-host Jeniece Wnorowski for a Data Insights episode on rack-scale design, hybrid cooling (including immersion heat recapture), and open standards.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits into AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.