Rachel Horton
@TechArena
Apr 29, 2026

OCP EMEA Summit 2026: The AI Data Center Gets Its Blueprint

The Open Compute Project Foundation’s 2026 OCP EMEA Summit landed in Barcelona this week with more than 2,000 attendees, up from 1,100 in Dublin last year and 700 in Lisbon the year before that. Seventy percent of the crowd was European.

The organization turns 15 this year, and the milestone arrived not with nostalgia but with an agenda that felt like a sprint. OCP formalized its Open Data Center Ecosystem Vision last October, an expanded strategic framework stretching from the power grid to the silicon die. (We broke down what that vision includes heading into the summit.) The keynotes in Barcelona made clear just how fast that framework is converting into engineering.

The Fungible Data Center Takes Shape

Amber Huffman’s keynote for Google anchored the morning with a demand signal that explains why the room was full: Gemini customers now consume 16 billion tokens per minute through direct API use alone. All seven of Google’s two-billion-user products run Gemini models, as do all 15 of its half-billion-user products.

That volume drives Google’s push for what it calls the “fungible data center,” infrastructure built to accommodate TPUs, third-party GPUs, optical circuit switching, and liquid cooling interchangeably. The goal is a building that can absorb new hardware generations without a rebuild. When Google put the fungible data center proposal to the industry last October, more than 50 organizations signed on. Six months later, the collaborative has published a 0.5 specification and is working toward 1.0 with AMD, Meta, and NVIDIA contributing. Google framed OCP’s pace as “specs at product speed,” targeting the 80% of requirements the industry shares and reaching alignment within three to nine months.

Sustainability ran through the presentation, too. Product category rules for carbon measurement, developed with AWS, Meta, and Microsoft, define physical scope down to individual PCBAs so that lifecycle comparisons are apples to apples. A clean backup power initiative through the Net Zero Innovation Hub covers PEM fuel cells, long-duration energy storage, and clean fuel generators. Google is also backing an evaluation framework for low-carbon concrete through the OCP Academy.

Power Architecture Hits an Inflection Point

NVIDIA’s Javier Peman put the forcing function on screen. Rack density sits at roughly 150 kW today. By year end, it climbs to 250 kW. Next year, 650 kW. Jensen Huang has already announced the 1 MW rack. That trajectory, measured in just three years, rewrites every assumption about how power reaches compute.

At 48 volts, converters and busbars consume so much rack space that accelerators get squeezed out. Google’s Mount Diablo project, built with Meta and Microsoft, addresses this with 800 VDC delivery through a sidecar rack. The sidecar handles AC-to-DC conversion, and every rack unit in the compute cabinet stays dedicated to GPUs or TPUs.
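The space argument is straightforward Ohm's-law arithmetic: for a fixed rack power, raising the distribution voltage from 48 V to 800 V cuts busbar current by the same factor, and conductor cross-section scales roughly with current while resistive loss scales with its square. A back-of-envelope sketch using the density figures from the keynote (the calculation itself is illustrative, not from any OCP spec):

```python
# Busbar current needed to deliver a given rack power at a given
# distribution voltage: I = P / V. Conductor size tracks current,
# and resistive loss tracks I^2, so higher voltage pays off twice.

def busbar_current(power_w: float, voltage_v: float) -> float:
    """Current (amps) a busbar must carry to deliver power_w at voltage_v."""
    return power_w / voltage_v

# Rack density trajectory cited by NVIDIA: today, year end, next year, 1 MW.
for rack_kw in (150, 250, 650, 1000):
    p = rack_kw * 1_000
    i_48 = busbar_current(p, 48)
    i_800 = busbar_current(p, 800)
    print(f"{rack_kw:>4} kW rack: {i_48:>8,.0f} A @ 48 V  vs  {i_800:>5,.0f} A @ 800 V")
```

At 1 MW, a 48 V busbar would have to carry over 20,000 A, which is why the conversion hardware crowds out accelerators; at 800 V the same rack draws 1,250 A, small enough to move the AC-to-DC stage into a sidecar.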

Schneider Electric’s Sebastien Cruz Mermy mapped the longer arc. Sidecar configurations handle up to roughly 1 MW and represent the immediate, deployable step. Centralized low-voltage DC converters cover the 1 to 5 MW range. Beyond 5 MW, solid-state transformer technology becomes the path forward, with centralized high-voltage DC distribution. His most important message: this is not just a conversion problem. Protection, distribution, and conversion must be engineered and qualified as a single system. That full-system thinking was the most consequential architectural point of the morning.

Liquid Cooling Is Now the Baseline

NVIDIA’s Peman left no ambiguity on cooling.

“We forget about air cooling. It’s gone,” he said. “If you ever want to work with NVIDIA, it’s going to have to be DLC.”

Google’s Project Deschutes hit its final 1.0 specification, incorporating the Redmond Hot Aisle containment system for high-density liquid-cooled rack deployments. Six vendors delivered prototypes within one quarter of the earlier 0.8 spec release.

The NeoCloud panel drove the point home with production deployments, not prototypes. Denvr Dataworks has shrunk an H200 SXM server from 6U to 3OU using immersion cooling, fitting 14 H200 GPUs per rack where traditional air-cooled configurations support two to four. They have been running those servers in immersion for two years. Crusoe, meanwhile, is standardizing on closed-loop, non-evaporative systems at every new site it develops.

NeoClouds Find Their Footing Through OCP

A panel featuring Scaleway, Denvr Dataworks, Crusoe, and FarmGPU revealed the NeoCloud as a distinct infrastructure tier. At GTC earlier this year, Jensen Huang noted that 60% of GPUs ship to hyperscalers and 40% to everyone else. The panelists represent that “everyone else,” and they are building with very different playbooks.

Scaleway bets on European sovereignty and deep customer co-design. Denvr controls the full vertical stack from facility to cloud portal. Crusoe operates as an energy-first company, integrated from electrons to tokens. FarmGPU is placing its chips on AI storage, predicting that inference workloads will demand 16 TB of storage per GPU in the Rubin generation, alongside confidential compute using Intel TDX and AMD SEV.

When asked which OCP standard has had the most tangible impact, every panelist gave the same answer: ORV3. The open rack spec delivers copy-paste deployability across sites, multi-generation hardware lifecycle planning, and supply chain confidence through volume standardization. Crusoe’s Peter Sheh called standardization “key to speed” at scale. Scaleway’s Yann-Guirec Manac’h captured the simplicity: “One question. Is it open rack? Yes, no. That’s it.”

Manac’h also surfaced a constraint that none of those deployment wins capture.

“Our main constraint is access to power,” he said. Connecting to the European grid at tens-of-megawatt scale remains complicated and slow, and the OCP hardware supply chain is entirely import-dependent. “Nobody’s building in Europe, so we have to import everything.” Manufacturing sovereignty is the gap no spec can close.

Europe’s Moment, and Its Gap

The European growth story at OCP is real. Twenty-three percent of the organization’s membership is now European, 38% of its startup members are based in the region, and IDC projects $15 billion in OCP-recognized equipment deployed across Europe by 2029. NVIDIA is pushing adoption through two tracks: AI factories that repurpose national research centers for applied AI work with startups and SMBs, and Giga factories, purpose-built AI hyperscale data centers. Five to seven Giga factories are planned for Europe, though timelines have slipped past their original 2025 targets.

The TechArena Take

Going into Barcelona, we described the ODCE Vision as OCP’s bid to become structurally relevant at every layer of the AI infrastructure stack, grid to chip. The keynotes confirmed the ambition is sticking.

What the stage added that the specs alone cannot convey is urgency. NVIDIA’s rack density roadmap compresses every planning timeline in the industry. The power architecture conversation shifted from white papers to multi-vendor engineering, with Mount Diablo and Schneider’s system-level framework both presenting concrete paths. The NeoCloud panel proved that OCP specifications are translating into real deployment velocity outside hyperscale walls.

One moment stood out. Google observed that the combined standards corpus across Ethernet, PCIe, NVM Express, DDR5, and OCP’s 200-plus work streams now exceeds 10,000 pages. Reading it all would take 370 hours. Google’s proposed solution: use AI to read the full corpus, catch conflicts across specifications, and track every decision and version. AI-assisted standards development, the speaker argued, is “a substantial opportunity for our industry.” When the complexity of building AI infrastructure has itself become an AI problem, you know the scale of what this community is trying to manage.
