
CelLink at OCP: 3 Key Takeaways for AI Factories
During CelLink’s first-ever exhibit at the Open Compute Project Foundation’s 2025 global summit, what struck me wasn’t a single system or new tech announcement. It was the massive scale and complexity of the systems on display in San Jose. Design considerations that once were limited to the inside of a chassis now span racks, pods, and even data halls. Power and cooling, data transmission, and commissioning strategies are being developed simultaneously as part of an integrated plan for massive data centers. For an electric vehicle supplier entering the data center market, there’s a clear signal: the data center universe needs system-level thinkers more than ever. Below are three takeaways I brought home from San Jose.
1) The unit of design has shifted to the rack (and multi-rack)
For many years, the data center industry treated the server box as the center of gravity and the rack as a container. At OCP 2025, the gravitational center sat firmly at rack scale and beyond. Vendors weren’t just announcing parts; they were showing buyable racks and reference pods with standardized mechanics, predictable service windows, and modularity that assumes multi-vendor integration.
When the entire data center is the product, interoperability and collaboration are fundamental requirements, not one-off, after-the-fact engineering projects. Power distribution, liquid manifolds, fabric, and service clearances need to be engineered together, tested together, and delivered as a single, repeatable unit. That reduces onsite “glue work,” accelerates time-to-online, and makes capacity build-outs behave more like a supply chain problem instead of an R&D project.
2) Efficiency must span the whole power path, from the grid down to the GPU
Energy costs and utility constraints are hard walls that every operator contends with. The conversation used to stop at power supplies or busbars; now it spans the entire power path. From upstream skids and medium-voltage gear, through prefabricated power pods, into rack-level distribution and finally to the last few centimeters inside the server, every interface is being scrutinized for energy loss, complexity, and long-term reliability.
Two themes stood out. First, higher-voltage power delivery and fewer mechanical connections are becoming design tenets, not just wishful thinking. Second, manufacturability is part of the efficiency story. If you can pre-integrate cleanly at the factory, you don’t just save electrons; you save weeks of field labor, re-terminations, and retests that sap schedules and budgets. Efficiency is both electrical and operational.
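As a rough back-of-envelope illustration of why higher distribution voltages help (the voltages below are assumed for illustration only, not drawn from any specific system shown at OCP): for a fixed power draw P over a distribution path with resistance R, the current is I = P/V, so conduction loss falls with the square of the voltage:

\[
P_{\text{loss}} = I^{2} R = \left(\frac{P}{V}\right)^{2} R,
\qquad
\frac{P_{\text{loss}}(48\,\mathrm{V})}{P_{\text{loss}}(800\,\mathrm{V})} = \left(\frac{800}{48}\right)^{2} \approx 278.
\]

Every eliminated connector and termination also removes a small series resistance (and a potential failure point) from that same path, which is why fewer mechanical connections appear alongside higher voltage as an efficiency lever.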
3) Liquid cooling is table stakes—now make it efficient and flexible
Every serious roadmap now assumes liquid cooling. The debate is no longer air vs. liquid; it’s which liquid strategy unlocks density and speed without shortening the facility’s useful life. Cold plate, immersion, and hybrid approaches all have a place, but the winners will minimize facility disruption, maximize reliability, fit into standardized rack mechanics, and keep serviceability reasonable.
I heard operators ask a simple question: Will this let me scale up and scale out without ripping up what I just installed? Solutions that pair rack-native mechanics with straightforward commissioning, safe quick-disconnects, and clear maintenance windows are getting the nod.
What This Means for How We Build
For AI factories, we should co-design power, cooling, and interconnect with the same discipline that goes into a production line: standard racks, standard interfaces, and predictable service envelopes. The upside is more than density; it’s repeatability. That’s how we compress power-on schedules and make capacity a planning exercise we can trust.
Just as important: the industry must reduce manual, low-value assembly steps across the chain. Every extra wire to cut, crimp, route, label, and verify is a schedule risk and a potential field failure. Automation at the component level matters, but the real leverage appears when you can eliminate entire steps from server assembly and data center build-out.
CelLink PowerPlane: Efficient Power—Coupled with Integrated Cooling
At CelLink, we are focused exclusively on power and cooling. Our PowerPlane product replaces bundles of discrete wires with a space-efficient, flexible laminated power circuit that delivers high currents in a flat, ultrathin form factor. Instead of connecting motherboards one cable at a time, they dock directly to the PowerPlane through a simple, repeatable installation process.
This matters to a server manufacturer for several reasons:
Density without chaos. By consolidating conductors into a flat, designed path, we free up precious volume for more compute and signal transmission, the true value-add functions of a server. The PowerPlane makes it easier to place accelerators and signal connections where they need to be and to maintain clear service windows.
Cooling-ready by design. Power and cooling shouldn’t fight each other for space. The design approach we take to deliver power in a very thin (<1 mm) form factor can be tailored to align with liquid-cooling manifolds, keeping designs clean, minimizing interference, and opening the door to integrated power-and-cooling layers that enable a revolution in rack and system design.
Unlocking vertical power delivery. By delivering power and cooling to the backside of the motherboard, the PowerPlane efficiently removes heat from the voltage regulators used for vertical power delivery, keeping them in their most efficient operating temperature range.
Stepping back, this isn’t just a different cable. It’s a revolutionary design concept that eliminates manual wiring and replaces it with a high-precision, repeatable, space-saving component.
PowerPlane is a trademark of CelLink.