
Equinix and Solidigm Map AI’s Ripple Effects Across Tech
The artificial intelligence “surge” is here, and with it, data center infrastructure is fundamentally changing. From increased demand for liquid cooling to rising interest in co-location services, both bleeding-edge and well-established solutions are being called into play to answer AI’s appetite for power, cooling, and data access. I recently explored this phenomenon with Glenn Dekhayser, global principal technologist at Equinix, and Scott Shadley, leadership marketing director at Solidigm, to understand how enterprises are navigating AI’s infrastructure demands.
The fundamental challenge facing enterprises deploying AI can be summarized in two words, according to Glenn: “power and heat.” Where graphics processing units (GPUs) once served as passengers in computing architectures, they now drive the entire infrastructure bus, and the power they demand is staggering. An estimated 97 gigawatts of new power will be required for all registered data center projects by 2030, and with an average nuclear reactor putting out roughly 1.4 gigawatts, “the math doesn’t work,” as Glenn said.
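Glenn’s point can be made concrete with back-of-the-envelope arithmetic. The figures below are the ones quoted above; the script itself is just an illustration:

```python
# Back-of-the-envelope check of the power gap Glenn describes.
projected_demand_gw = 97.0   # estimated new data center demand by 2030
reactor_output_gw = 1.4      # output of an average nuclear reactor

reactors_needed = projected_demand_gw / reactor_output_gw
print(f"Equivalent reactors needed: {reactors_needed:.0f}")
# Roughly 69 average reactors' worth of new generation
```

Building dozens of reactors’ worth of new capacity in five years is not realistic, which is why the conversation turns to efficiency in cooling, storage, and placement rather than generation alone.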
These power requirements create cascading effects throughout the infrastructure stack. Glenn noted that not every organization needs the highest-performance solutions, and liquid cooling becomes necessary only when rack density exceeds 40 kilowatts (kW). However, because direct-to-chip liquid cooling can capture and dissipate up to 1,000 times as much heat as air, enterprises deploying next-generation GPUs must be prepared to implement this technology. This transition fundamentally changes total cost of ownership calculations and operational complexity for new infrastructure investments.
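The rack-density rule of thumb above can be sketched as a simple decision function. The 40 kW threshold is the figure Glenn cites; the function and sample densities are hypothetical illustrations:

```python
# Illustrative sketch of the rack-density rule of thumb:
# air cooling suffices below ~40 kW per rack; beyond that,
# direct-to-chip liquid cooling becomes necessary.
LIQUID_COOLING_THRESHOLD_KW = 40.0

def cooling_strategy(rack_density_kw: float) -> str:
    """Return a coarse cooling recommendation for a given rack density."""
    if rack_density_kw > LIQUID_COOLING_THRESHOLD_KW:
        return "direct-to-chip liquid cooling"
    return "air cooling"

print(cooling_strategy(15))   # typical enterprise rack
print(cooling_strategy(80))   # dense next-generation GPU rack
```

The point is that the decision is workload-driven: an organization running modest inference racks may never cross the threshold, while one deploying dense GPU clusters has no choice.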
The full effects of AI’s increasing prevalence have yet to be felt, but as Glenn noted, AI is driving change across all industries. While generative AI has advanced adoption so that “everybody has a need for” AI now, different forms of machine learning are also being used to solve domain-specific problems. “Every conversation has some angle to it,” he noted, “whether you’re to be a consumer or provider or some middle service provider for data.”
Beyond dealing with power and heat, enterprises are adopting a mix of infrastructure strategies to adapt to the needs of AI workloads. For example, contrary to predictions that AI would centralize workloads in public clouds, enterprises are increasingly turning to co-location for AI deployments. The driving factors extend beyond simple cost considerations. The rapid pace of AI innovation means that the public cloud provider chosen two years ago may not offer the optimal AI services, capacity, or specialized models an organization needs today. Co-location provides the connectivity and flexibility to leverage various services without vendor lock-in, while maintaining control over data sovereignty and performance.
Perhaps nowhere is AI’s impact more visible than in storage strategy. Scott outlined how different AI workloads create distinct storage demands, and how innovative techniques can make the most of storage in all portions of the AI pipeline. For example, Solidigm has shown that retrieval-augmented generation (RAG) tasks can be offloaded from expensive dynamic random-access memory (DRAM) to optimized solid-state drives (SSDs), and that performance can actually be improved while lowering costs by doing so.
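The general idea of shifting a RAG working set from DRAM to SSD can be sketched with a memory-mapped embedding store: vectors live on disk and are paged in on demand during search rather than held entirely in RAM. This is a minimal, hypothetical illustration of the concept, not Solidigm’s actual technique:

```python
import os
import tempfile
import numpy as np

# Hypothetical sketch: keep RAG embeddings on SSD via a memory-mapped
# array instead of holding them all in DRAM. The OS pages vectors in
# on demand during the similarity search.
dim, n_vectors = 128, 10_000
path = os.path.join(tempfile.mkdtemp(), "embeddings.npy")

# Build and persist an embedding store on disk (normally done offline).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((n_vectors, dim)).astype(np.float32)
np.save(path, embeddings)

# Reopen memory-mapped: data stays on storage, not resident in RAM.
store = np.load(path, mmap_mode="r")

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k highest dot-product matches."""
    scores = store @ query          # streams vectors from disk as needed
    return np.argsort(scores)[-k:][::-1]

query = embeddings[42]              # reuse a stored vector as the query
print(top_k(query))                 # the stored copy should rank first
```

Production systems would add an index structure rather than a brute-force scan, but the principle is the same: fast SSDs let the retrieval tier hold far larger corpora than DRAM budgets allow.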
Both experts emphasized that successful AI infrastructure requires holistic thinking across compute, storage, and interconnect. Rather than isolated decisions, enterprises need integrated solutions that can adapt to rapidly changing requirements.
The TechArena Take
The AI infrastructure revolution is forcing enterprises to balance competing demands: the need for high-performance computing against power and cooling constraints, the desire for cutting-edge capabilities against cost considerations, and the requirement for agility against infrastructure investments. Equinix and Solidigm demonstrate how thoughtful collaboration can address these challenges through flexible, efficient solutions that scale from current needs to future requirements.
As AI continues its rapid evolution, organizations that invest in adaptable, well-connected infrastructure today, while maintaining control over their data platforms, will be best positioned to capitalize on tomorrow’s AI innovations. The key is not predicting exactly what AI will become, but building the foundation to respond quickly when it does.
For more insights on Equinix’s AI infrastructure solutions, visit blogs.equinix.com or connect with Glenn Dekhayser on LinkedIn. Learn more about Solidigm's AI-focused storage innovations at solidigm.com or reach out to Scott Shadley on social media platforms.