
VAST and CoreWeave Sign $1.17B Deal to Power AI Data Layer
VAST Data announced a $1.17 billion commercial agreement with CoreWeave that makes the VAST AI OS the primary data foundation for CoreWeave’s AI cloud, extending a collaboration that began when CoreWeave selected VAST to power its GPU cloud storage layer in 2023.
AI clouds are maturing from GPU-first builds to balanced, data-aware platforms that can keep training pipelines fed while serving real-time inference at scale. In that context, the data layer isn’t a bolt-on—it’s table stakes. VAST and CoreWeave are formalizing that reality in dollar terms and roadmap alignment.
The companies describe a multi-year commercial agreement that cements VAST as CoreWeave’s primary data platform. The release emphasizes instant access to massive datasets, reliability at cloud scale, and performance across both training and inference. It also highlights a “new class of intelligent data architecture” aimed at continuous training and real-time processing.
While the exact term length wasn't disclosed, outside reporting characterizes the pact as multi-year and situates it within the broader generative-AI infrastructure build-out, noting VAST's momentum and revenue trajectory this year.
How the Stack Comes Together
CoreWeave is known for GPU-accelerated infrastructure tailored for AI/ML, rendering/VFX, and other compute-intensive workloads. VAST's AI OS consolidates data and compute services, with the company positioning its DASE (Disaggregated, Shared-Everything) architecture as a parallel distributed system designed to remove the trade-offs between performance, scale, and resilience. In practical terms, the pitch is a single, scalable substrate that can be deployed across any CoreWeave data center to support both throughput-heavy training and latency-sensitive inference paths.
Two strategic threads stand out:
- Data locality and pipeline efficiency: Training clusters need sustained bandwidth and predictable data access; inference fleets need fast retrieval against ever-fresh models and features. Making VAST the primary data foundation is meant to reduce friction between those modes so customers can iterate faster with fewer data-movement penalties.
- Product-roadmap lock-step: The partners say they’ll co-deliver “sophisticated data services across the full stack,” which reads as deeper integration of storage, metadata, caching, and possibly agentic or workflow services directly in the platform—less glue code for customers, more out-of-the-box performance.
Why It’s Credible
This isn’t a green-field pairing. CoreWeave first named VAST as the data platform for its NVIDIA-powered AI cloud back in 2023, and since then both companies have scaled rapidly alongside enterprise AI adoption. Today’s announcement formalizes that relationship with a sizable commercial framework and sets expectations around platform primacy.
Market Context
AI infrastructure buyers are hunting for time-to-value: they want capacity that stands up quickly, sustains training throughput, and serves inference without spiraling costs. That’s pushing clouds—especially specialized “neo-clouds” like CoreWeave—to harden their data planes with predictable performance and global operability. The $1.17B figure signals that in the AI era, the data layer is where performance, reliability, and unit economics converge. External coverage also notes VAST’s broader customer footprint and fundraising signals, reinforcing the company’s position as more than a storage vendor—it’s pitching a full AI operating substrate.
What Customers Should Watch
- Performance under mixed workloads: Can the same data substrate deliver both the streaming throughput that training demands and the low-latency retrieval that inference needs, without costly duplication?
- Operational simplicity: If VAST’s AI OS truly consolidates services, expect fewer moving parts for customers and faster onboarding of new regions or clusters inside CoreWeave’s footprint.
- Economics at scale: The promise here includes “cloud-scale economics.” Keep an eye on storage-to-compute ratios, egress patterns, and the amount of data staging you can eliminate as pipelines evolve.
What’s Next
VAST says the partnership will continue to evolve with shared product development. An analyst community briefing with CEO Renen Hallak on Thursday, November 13, will unpack strategy implications and additional updates.
TechArena Take
The signal here isn’t just the dollar figure—it’s the architectural vote: CoreWeave is betting that a unified, software-defined data plane is indispensable to AI cloud differentiation. For VAST, “AI OS” stops being slideware and becomes contractually central to one of the most prominent AI clouds. The near-term win is customer experience—simpler pipelines, faster iteration, fewer knobs to turn. The longer-term implication is competitive pressure: hyperscalers and other neo-clouds will need similarly opinionated data stacks that erase the gaps between training and inference. If VAST and CoreWeave can translate this alignment into measurable SLA gains and lower delivered cost per token/frame/query, this deal will read as a blueprint for how AI clouds professionalize the data layer at scale.