
Scaleway’s AI Vision: Scalable, Sustainable Cloud Infrastructure
In a landscape often dominated by hyperscalers and familiar names, Scaleway is quietly rewriting the rules. With a clear vision and a sharp focus on what modern cloud scalers actually need, it’s stepping into a role that feels both timely and transformative. In a recent conversation between Conor Doherty, field applications manager at Solidigm, and Yann-Guirec Manac’h, head of hardware R&D at Scaleway, we get a closer look at how this European cloud provider is not only keeping pace with hyperscale trends, but is also helping to shape them through focused innovation in AI infrastructure and sustainability.
Scaleway’s approach feels refreshingly grounded. It focuses on delivering a complete foundation — compute, network, storage — and binding it all together with a unified control plane that supports both traditional and AI workloads. But what really sets the company apart is its dual-track strategy for AI: accessible GPU instances for smaller-scale use, and massive, tightly interconnected GPU clusters for heavy-duty training jobs. It’s the kind of infrastructure that recognizes AI work isn’t a one-size-fits-all operation — some tasks need two GPUs, while others demand thousands. Scaleway is designing for both.
Gaining a deeper understanding of Scaleway’s AI strategy starts with examining the data pipeline. As Yann-Guirec put it, AI training isn’t just compute-heavy — it’s about data complexity, scale and flow. Harvesting and curating vast datasets, handling throughput during training, managing checkpoints and doing inference all require different storage strategies. Cold storage for archival compliance, warm layers for preparation and hot storage for training — each has different hardware implications. It’s not just about speed, it’s about adaptability, and Scaleway’s infrastructure acknowledges that every phase in the AI pipeline has unique demands.
With the conversation around sustainability finally taking center stage in tech, Scaleway’s stance is more than a footnote — it’s core to the company’s identity. Backed by the Iliad Group, Scaleway has built and operates data centers that run on 100% renewable energy. DC5, the data center that houses many of its AI pods, forgoes traditional air conditioning in favor of adiabatic and free cooling methods. The result is dramatically lower power usage effectiveness (PUE) without sacrificing performance. But Yann-Guirec takes it a step further, pointing out a rarely discussed metric: water usage. Scaleway is tracking water usage effectiveness (WUE) as well, with a view toward responsible innovation that doesn’t overlook environmental cost.
What’s perhaps most fascinating is how Scaleway sees the future of AI training workloads. Today’s language models may be grounded in web-scale text, but tomorrow’s models — multimodal, agentic and domain-specific — will need exponentially more data across formats, such as images, audio and video. That means even more demand on both GPU throughput and the bandwidth feeding those GPUs. Scaleway is building toward this future now, with GPU pod systems capable of pushing hundreds of gigabits per second and storage systems built to scale with that need.
While big names in AI infrastructure often dominate the narrative, conversations like this remind us that serious innovation is happening beyond the usual suspects.
So, what’s the TechArena take? Scaleway isn’t trying to be everything to everyone, but for teams building sophisticated AI pipelines, in Europe and beyond, it’s quickly becoming a name to watch.
To dive deeper, visit scaleway.com.