Explore the cutting edge of computing, from data center to edge, including solutions unlocking the AI pipeline, all backed by Solidigm's leading SSD portfolio.



Join host Allyson Klein and co-host Jeniece Wnorowski in this episode of Data Insights as they chat with Gigabyte's Chen Lee about AI innovations and the future of server technology at OCP Summit.

Live from OCP Summit 2024, this Data Insights podcast explores how Ocient’s innovative platform is optimizing compute-intensive data workloads, delivering efficiency, cost savings, and sustainability.

Join Allyson Klein and Jeniece Wnorowski as they chat with Eddie Ramirez from Arm about how chiplet innovations and compute efficiency are driving AI and transforming data center architecture.

Learn how CoolIT Systems is driving efficiency and performance in AI and data centers with cutting-edge liquid cooling solutions in our latest Data Insights podcast.

Jeniece Wnorowski and I recently got the chance to sit down with Gregory Lebourg of OVHcloud, a major European cloud provider that’s been making significant strides in the global cloud market. With a focus on sustainability, data sovereignty, and competitive pricing, OVHcloud is carving out a space that’s distinctly European. Our conversation delved into OVHcloud’s unique approach, their mission, and the trends they see shaping the future of cloud computing.
One of the first things that Gregory emphasized was OVH’s identity as a European service provider. While the cloud market is dominated by American and Chinese giants, OVHcloud stands out as a provider deeply rooted on the continent, adhering to European standards and business practices. This isn't just about where they’re based, but about how they operate. Data sovereignty is at the core of their operations, and OVHcloud ensures that customer data remains protected under European regulations, providing a significant advantage for businesses looking to avoid the complexities of non-EU data jurisdiction.
Gregory leads OVH’s sustainability practices, and in terms of environmental impact, OVHcloud is setting a very high bar. They’ve adopted a circular economy approach, focusing on minimizing waste and optimizing resource efficiency across everything they do. In our chat, Gregory shared that OVH data centers are equipped with custom water-cooling systems that reduce energy consumption by up to 50% compared to traditional air-cooling methods. This innovative approach has earned them an impressive PUE (Power Usage Effectiveness) rating of around 1.1 across most of their facilities, which is significantly better than industry averages. But that’s not all. They also use refurbished servers, which helps them keep costs low while reducing their carbon footprint. OVHcloud’s data centers operate on a massive scale, with more than 400,000 servers across 33 data centers globally. Despite this scale, they’ve managed to maintain competitive pricing without compromising on performance, a feat they attribute to their sustainability practices and vertically integrated supply chain.
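For context on that PUE figure: Power Usage Effectiveness is simply total facility power divided by the power consumed by IT equipment, so 1.1 means only 10% overhead for cooling, power conversion, and everything else. A minimal sketch of the math, with hypothetical power figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    A perfect (practically unreachable) score is 1.0."""
    return total_facility_kw / it_equipment_kw

# Hypothetical figures, purely for illustration.
it_load_kw = 1000.0     # servers, storage, and network
overhead_kw = 100.0     # cooling, power conversion, lighting
print(pue(it_load_kw + overhead_kw, it_load_kw))  # 1.1

# An air-cooled facility at the common ~1.5 mark would spend
# 500 kW on overhead for the same IT load -- five times as much.
```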
We also had a chance to discuss pricing with Gregory, and it became clear that OVHcloud’s commitment to affordability is about more than just competing with other cloud giants — it’s part of their mission to democratize cloud access. By controlling their supply chain and building their servers in-house, they’re able to offer services at 20%-50% lower costs than the major competitors. This cost advantage has been crucial in helping small and medium-sized businesses access high-performance cloud services that might otherwise be out of reach.
One area where OVHcloud is particularly focused is in supporting multi-cloud strategies. Businesses are increasingly looking for flexibility in their cloud environments, and OVHcloud has responded by offering a range of services that can integrate seamlessly with other providers. This approach provides customers with more choices and enables them to build cloud architectures that suit their unique needs.
In today’s digital landscape, data security and privacy are critical concerns. OVHcloud takes a strong stance on data sovereignty, a major selling point for European customers wary of foreign jurisdiction over their data. They’ve also aligned their services with GDPR (General Data Protection Regulation) requirements, which gives customers the peace of mind that their data is protected according to some of the strictest standards in the world.
During our chat, Gregory underscored their commitment to transparency and compliance. They’re actively involved in initiatives like GAIA-X, which aims to create a federated and secure data infrastructure for Europe. This aligns with OVHcloud’s long-term vision of building a robust digital ecosystem in Europe that champions trust and user control over data.
When it comes to future technologies, Gregory shared that OVH is keeping its eyes on the horizon. They’re particularly interested in quantum computing and AI, areas they believe will transform the cloud landscape in the coming decade. Their partnership with the French government on the Plan Quantum initiative exemplifies their proactive approach to these technologies. As Gregory sees it, quantum computing holds the potential to revolutionize data processing and encryption, making it a game-changer for sectors like finance, healthcare, and defense. Meanwhile, OVH is investing in AI-driven tools that will enhance cloud services, offering more intelligent insights and automation options for customers.
So what’s the TechArena take? After our conversation, I walked away with a sense that OVHcloud is setting a very high standard for innovative cloud services: high-performance, affordable, and sustainable, designed around European customer priorities. Their commitment to data sovereignty is particularly timely, as businesses face mounting pressure to keep data management aligned with government regulations. OVHcloud’s approach is refreshing in a market dominated by a few powerful players. For businesses in Europe and beyond, OVHcloud is proving to be a compelling alternative to the usual suspects, and I’m excited to see how they continue to evolve in the years to come.
Listen to the full conversation with OVHcloud here.

During this episode of Data Insights sponsored by Solidigm, Grégory Lebourg – Global Environmental Director at OVHcloud – discusses how companies can meet their environmental goals effectively.

I love hearing from providers on how they’re grappling with delivering cloud services to support customer adoption of AI. Jeniece Wnorowski and I got that chance in a recent episode of our Data Insights podcast when we hosted Ian McClarty, President of PhoenixNAP, for a deep dive into the evolving role of AI in data centers and how bare metal cloud is meeting the demand for infrastructure that’s up to the AI performance challenge.
Our conversation started with Ian sharing his view on the enormous impact AI is having on data centers and the unique demands AI workloads bring to both operators and their customers. These workloads require immense compute power as well as low-latency communication, putting significant strain on traditional cloud infrastructure. Ian pointed out how the explosion of data from IoT devices, streaming, and cloud applications continues to fuel the AI boom, and that AI can’t be treated as just another workload in the data center. It demands a completely fresh approach to data center infrastructure, something Ian and his team at PhoenixNAP are laser-focused on providing.
Ian then turned to bare metal cloud offerings, something PhoenixNAP is famous for delivering, and how they are particularly suited to meet AI’s growing infrastructure needs. Unlike typical cloud solutions that share resources, bare metal cloud provides dedicated servers that give companies access to raw, non-virtualized hardware. This is key, Ian explained, for resource-hungry AI workloads. Companies working on AI algorithms need the ability to quickly scale, spin up resources on demand, and process huge amounts of data, capabilities that bare metal cloud supports seamlessly.
Ian highlighted three key advantages to this approach: performance, scalability, and control. On performance, AI workloads in traditional virtualized environments can face latency issues or bottlenecks due to resource sharing, contention that dedicated hardware eliminates. On scalability, bare metal cloud allows for rapid scaling, whether a company needs to deploy a few servers for small-scale training or dozens for large AI models; the infrastructure can be customized and scaled up or down based on demand, providing flexibility that is crucial as AI workloads vary significantly in the compute power they need. Control is equally important, and Ian stressed how organizations want more control over their infrastructure when it comes to AI. With bare metal cloud, companies have the freedom to configure the hardware environment to suit their specific needs, which is especially important for workloads involving sensitive or proprietary data. This level of customization and control just isn’t possible in shared cloud environments.
As we love to do on the Data Insights series, we turned the conversation to sustainability. With recent reports forecasting that data center energy consumption could represent up to 20% of the world’s energy supply due to the rise of AI, operators are grappling with driving efficiency across every vector of computing. Ian acknowledged the industry’s responsibility to address the environmental impact and noted that PhoenixNAP is taking proactive steps to design data centers with energy efficiency in mind, from improving cooling technologies to optimizing server utilization. PhoenixNAP is also exploring renewable energy to power their facilities. While the journey to a fully sustainable data center is ongoing, the strides they’re making are encouraging. Ian believes that future innovations in both hardware and software will make sustainability not just an add-on but a core feature of high-performance computing environments.
The conversation made it clear that PhoenixNAP is primed for infrastructure transformation to support AI’s growth. The company’s focus on performance, flexibility, and sustainability positions it uniquely to meet the challenges and opportunities that AI presents. I left the conversation energized about the possibilities bare metal cloud offers for AI innovation and the impact it will have across industries.
Tune in to the full episode for more insights from Ian and how PhoenixNAP is reshaping the future of data centers.
During our latest Data Insights podcast, sponsored by Solidigm, Ian McClarty of PhoenixNAP shares how AI is shaping data centers, discusses the rise of Bare Metal Cloud solutions, and more.

Kelley Osburn gets storage. As an industry veteran and leader at Graid Technology, Kelley recently shared his insights on how the storage arena is rapidly transforming to fuel AI workloads and how his company’s SupremeRAID™ solution – a revolutionary approach to tackling modern data storage challenges – is hitting a sweet spot in the market.
So why is traditional RAID no longer sufficient? Kelley explained how these configurations struggle with high-performance computing demands, especially in data-intensive environments. He emphasized the need for innovation in data storage as the exponential growth in data continues to challenge existing systems, explaining that RAID's original purpose was to provide redundancy and protection against disk failures. While this redundancy is still valued, it lacks the performance desired by many customers.
As data stores grow and the speed at which data must be delivered becomes more urgent, innovation in the approach extends the viability of RAID solutions while meeting customer demand. Graid's SupremeRAID™ solution, for example, optimizes storage performance by offloading RAID tasks to a dedicated hardware device, enhancing speed and efficiency without compromising data integrity. This makes it an ideal solution for customers managing massive amounts of data for applications like AI, machine learning, and big data analytics.
Kelley detailed the core value of Graid’s solution, describing how SupremeRAID™ addresses critical bottlenecks in traditional storage systems by offering unprecedented performance gains while reducing the computational load on CPU and system resources. The innovation lies in its architecture, which integrates both hardware and software in a way that eliminates RAID-specific processing burdens from the host server, thus allowing server resources to focus on other tasks. The result is a solution that dramatically improves throughput and reduces latency, creating a more balanced and efficient data environment.
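To make that computational burden concrete: in a parity scheme like RAID 5, every stripe written requires XOR-ing all of its data blocks to produce parity, and a rebuild reverses the same math. A minimal sketch of that arithmetic (generic RAID 5 parity, not Graid's implementation):

```python
from functools import reduce

def raid5_parity(data_blocks: list[bytes]) -> bytes:
    """XOR corresponding bytes of every data block to form the parity
    block. On a busy array this runs for every stripe written -- the
    host-CPU burden that offload designs like SupremeRAID remove."""
    return bytes(reduce(lambda a, b: a ^ b, column)
                 for column in zip(*data_blocks))

blocks = [b"\x0f" * 8, b"\xf0" * 8, b"\xaa" * 8]
parity = raid5_parity(blocks)

# Redundancy in action: a lost block is recovered by XOR-ing the
# parity with the surviving blocks.
recovered = bytes(p ^ x ^ y for p, x, y in zip(parity, blocks[1], blocks[2]))
assert recovered == blocks[0]
```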
In addition to AI and ML, SupremeRAID™ also proves to be a valuable tool in applications in media and entertainment, where high-resolution content creation, editing, and rendering demand significant data processing power. Its ability to handle these workloads without compromising performance makes it a game-changer for companies managing large data sets.
Industry Implications and Future Outlook
So what are the broader implications of Graid's innovations for the storage industry? Kelley explained that as companies continue to generate vast quantities of data, the demand for more efficient and performant storage solutions will only grow. Graid’s SupremeRAID™ is positioned to address these challenges head-on, providing enterprises with the tools they need to manage, protect, and access their data faster and more reliably than ever before. This will rely on underlying storage media delivering the performance and density required for these tasks, and Kelley pointed to Graid’s strategic collaboration with Solidigm as an example of how high-performance QLC NAND delivers unique value to customers.
Looking to the future, Graid plans to continue evolving its technology to meet the ever-increasing demands of the data economy. As data volumes grow, so too will the need for innovative storage solutions that can handle not just the size, but the speed and complexity of modern workloads. SupremeRAID™ represents a critical step in that direction, offering a glimpse into the future of RAID technology and its role in addressing the data challenges of tomorrow.
Want to learn more? Check out the full episode here.

Join Allyson Klein and Jeniece Wnorowski as they chat with Rita Kozlov from Cloudflare about their innovative cloud solutions, AI integration, and commitment to privacy and sustainability.

Allyson Klein and Jeniece Wnorowski chat with Kelley Osburn of Graid about SupremeRAID™ and its role in tackling high-performance storage challenges in data-driven environments.

With GPU-driven AI training ruling the moment, we have finally reached the tipping point where liquid cooling overtakes air-cooled data center infrastructure for many environments. Consider, for a moment, that NVIDIA Blackwell-based racks are drawing from 60kW to 120kW per rack, a dramatic shift from the historic 5-10kW per rack delivered to fuel general-purpose applications. When you extrapolate that power across football fields of racks for a hyperscale training cluster, you realize that there’s a LOT of heat to extract. The debate has quickly shifted from air vs. liquid to which type of liquid to utilize, opening the door for market disruption and new player entry.
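The scale of that shift is easy to quantify. A back-of-the-envelope sketch using the figures above, with the rack count a purely illustrative assumption:

```python
# Heat load per the figures above; the rack count is illustrative.
legacy_rack_kw = 10       # historic general-purpose rack
ai_rack_kw = 120          # high end for NVIDIA Blackwell-based racks
racks = 1_000             # hypothetical hyperscale training cluster

print(legacy_rack_kw * racks / 1000, "MW")  # 10.0 MW
print(ai_rack_kw * racks / 1000, "MW")      # 120.0 MW
# Essentially every one of those megawatts becomes heat the facility
# must extract -- a 12x jump that air movement alone cannot absorb.
```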
This is why I was so excited to talk to Dr. Kelley Mullick, vice president of technology advancement at Iceotope. Kelley joined Iceotope, a Sheffield, England-based immersion cooling startup, last year, bringing with her a technology leadership pedigree and the notable achievement of having delivered the industry's first liquid cooling warranty while at Intel in 2022. Her PhD in chemical engineering and lengthy engagement in industry standards work place her squarely in the middle of liquid cooling advancement.
So why liquid cooling? Kelley confirmed that AI is the primary driver of urgency in the transition to liquid cooling due to its dense parallel computing nature, but also noted that broader commitments to sustainability have driven hyperscalers to consider liquid alternatives. She outlined the three alternatives in play in the liquid market: cold plate, tank immersion, and precision liquid cooling. While all are more effective and efficient than air, each alternative offers different advantages for consideration. Cold plate has the advantage of wide deployment in HPC environments and uses air to cool the parts of the chassis that liquid plates don’t directly target, supporting retrofit opportunities for existing infrastructure. Tank immersion delivers a solution where heat can be captured for secondary use, but it arrives at a weight that may require reinforcing existing data center flooring, likely limiting it to greenfield buildouts. Finally, precision liquid cooling is something of a hybrid, offering the advantages of immersion cooling with alternative chemistries to water, plus similarities to cold plate that allow deployment in existing vertical racks.
If this complexity wasn’t enough, there’s also the topic of chemistry, and it’s here that Kelley really lit up. To start, the options for liquid cooling are water (used in cold plate designs) and dielectric fluid (used in cold plate, immersion, and precision designs). Dielectric fluids are composed of hydrocarbon or fluorinated hydrocarbon fluids, with most vendors targeting hydrocarbon options because of their non-toxic composition and recyclability. For two-phase cooling solutions, however, only fluorinated hydrocarbons can be used, introducing toxic chemicals into the data center and presenting increased challenges from a circularity perspective.
Iceotope is delivering a pretty special chemistry within this landscape. Kelley explained that their solutions deliver precision cooling at up to 1,500 watts with a thermal resistance of 0.037 kelvin per watt, on par with fluorinated solutions but with a sustainable and environmentally friendly chemistry. This technology comes in adaptable form factors, including racks, power shelves, and more, enabling customers to deploy across data center and edge environments. Kelley also noted that different types of infrastructure, from GPUs and CPUs to storage JBODs, can be submerged in dielectric fluid. Iceotope has done extensive material compatibility testing to ensure customer deployments keep cool without eroding reliability.
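Those two numbers combine into a simple sanity check: temperature rise equals power times thermal resistance. A quick worked version, where the coolant temperature is a hypothetical input:

```python
# delta_T = P x R_th, the standard thermal resistance relation.
power_w = 1500        # watts, per the article
r_th_k_per_w = 0.037  # kelvin per watt, per the article

delta_t = power_w * r_th_k_per_w
print(delta_t)        # 55.5 K rise from fluid to component

coolant_c = 40.0      # hypothetical fluid temperature
print(coolant_c + delta_t)  # ~95.5 C at the component, roughly the
# operating range most data center silicon tolerates -- which is why
# 1,500 W parts remain coolable with this chemistry.
```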
What’s the TechArena take? We were delighted that we were able to feature this story on our Data Insights series sponsored by Solidigm as cooling is critical to delivery of the data pipeline. Iceotope is delivering disruptive technology in this space, and I expect to hear much more about their solutions as we head into the OCP Summit this fall. If liquid cooling is not on your radar today…put it on your radar. With hyperscalers moving rapidly to liquid alternatives, we expect solutions to scale to meet edge requirements and broader scale AI configurations in data centers. To learn more, check out the interview and visit Iceotope’s site.

In a recent episode of the TechArena Data Insights series, my co-host Jeniece Wnorowski and I had an insightful conversation with Ariel Pisetzky, the Vice President of Information Technology and Cyber at Taboola, about the transformative impact of data and AI on ad placement. Our discussion revealed how these advanced technologies are redefining the advertising landscape, making ad placements more efficient and targeted to business objectives.
The Shift to AI-Driven Advertising
Ariel emphasized that at Taboola, the mission is to "connect people with content they may like but never knew existed." This mission is powered by sophisticated algorithms that analyze user behavior, preferences, and context to deliver highly personalized content. Ariel noted, "Our systems are designed to process enormous amounts of data in real-time to understand user intent and deliver the most relevant ads." As someone whose business in part is driving micro-targeted paid media, I was delighted to learn from Ariel about what he and the team at Taboola are delivering.
I was also delighted to hear that AI is at the core of Taboola's strategy for ad placement. By utilizing machine learning models, Taboola can predict which ads are most likely to engage individual users. Ariel explained, "Our AI systems analyze vast amounts of data in real-time to understand user intent and preferences. This allows us to serve ads that are not only relevant but also engaging."
These AI algorithms take into account various factors, including browsing history, time of day, and even the type of device being used, ensuring that ads are placed in the optimal context, thus maximizing user interaction.
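As a flavor of what weighing those contextual factors can look like, here is a toy context-aware scoring sketch. The features, weights, and logistic form are invented for illustration and say nothing about Taboola's actual models:

```python
import math

# Toy contextual ad scorer. Feature crosses and weights are invented,
# purely to illustrate context-aware ranking -- not Taboola's models.
WEIGHTS = {
    ("topic:tech", "daypart:evening"): 1.2,
    ("topic:tech", "device:mobile"): 0.4,
    ("topic:travel", "daypart:morning"): 0.8,
}
BIAS = -2.0

def score(ad_topic: str, daypart: str, device: str) -> float:
    """Predicted engagement probability for one ad in one context."""
    z = BIAS
    z += WEIGHTS.get((f"topic:{ad_topic}", f"daypart:{daypart}"), 0.0)
    z += WEIGHTS.get((f"topic:{ad_topic}", f"device:{device}"), 0.0)
    return 1.0 / (1.0 + math.exp(-z))

# For an evening mobile reader, the tech ad outranks the travel ad.
print(score("tech", "evening", "mobile"))    # ~0.40
print(score("travel", "evening", "mobile"))  # ~0.12
```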
Dynamic and Contextual Ad Placements
One of the key innovations we discussed with Ariel is Taboola's approach to dynamic and contextual ad placements. Traditional ad placement strategies often rely on static parameters, but Taboola's AI-driven platform can adapt in real-time. For instance, if a user frequently reads tech blogs in the evening, Taboola's system might prioritize tech-related ads during that time frame.
Ariel highlighted this capability, stating, "Dynamic ad placements allow us to adjust the content based on immediate user context. This not only improves the user experience but also enhances ad performance for our clients."
Predictive analytics is another area where Taboola excels. By analyzing historical data and user behavior patterns, the platform can forecast future actions and preferences. This predictive power enables advertisers to stay ahead of trends and tailor their campaigns accordingly.
As Ariel mentioned, "Predictive analytics gives us a glimpse into what users might be interested in next. This foresight is invaluable for creating timely and relevant ad campaigns that capture user interest before it peaks."
Optimized Platforms Deliver on Taboola’s Vision
In our discussion, Ariel highlighted the collaboration between Taboola and Solidigm, focusing on how their combined efforts are enhancing data management and AI capabilities. Ariel mentioned that Solidigm’s advanced data storage solutions play a crucial role in supporting Taboola's AI infrastructure. He noted, "Solidigm's innovations in data storage technology have allowed us to manage and process vast amounts of data more efficiently, which is essential for our AI-driven ad placement systems."
Ariel further explained that the high-performance and reliability of Solidigm's storage solutions ensure that Taboola's AI models can access and analyze data in real-time, leading to more accurate and timely ad placements. "With Solidigm, we're able to scale our operations and maintain high performance even as our data needs grow," he added. This partnership exemplifies how cutting-edge storage technology can support the demanding requirements of modern AI applications, enabling more effective and personalized advertising strategies.
Challenges and Ethical Considerations
Despite the collective advancements Taboola has made to its platform, Ariel acknowledged the challenges and ethical considerations involved in using AI and data for ad placement. Privacy concerns and data security are paramount, and Taboola is committed to maintaining high standards in these areas. "We are constantly evolving our practices to ensure user data is handled with the utmost care and transparency," he emphasized.
Looking ahead, Taboola plans to further integrate AI capabilities to refine ad placements. The company is exploring the use of deep learning models to enhance content recommendations and improve ad targeting accuracy. Ariel shared, "Our goal is to push the boundaries of what's possible with AI, making our ad placements smarter and more intuitive." I, for one, cannot wait to see what is on the horizon as Taboola continues to spearhead innovation in this important arena for marketers.
For more insights from this episode, you can listen to the full conversation on TechArena's podcast.
About Solidigm:
Data storage requirements are evolving rapidly with the explosion of the AI era, and it's important to find the right partner that can provide the flexibility and breadth for each specific AI application. Solidigm is a global leader in innovative NAND flash storage solutions with a comprehensive portfolio of SSD products based on SLC, TLC, and QLC technologies. Headquartered in Rancho Cordova, California, Solidigm operates as a standalone U.S. subsidiary of SK hynix with offices across 13 locations worldwide.

TechArena's Allyson Klein and Jeniece Wnorowski from Solidigm sit down with Kelley Mullick, Vice President of Technology Advancement and Alliances at Iceotope, to discuss the latest in data center cooling technology. They dive into the role of liquid cooling in supporting AI workloads, the sustainability benefits of advanced cooling solutions, and the future of edge computing.

In our latest TechArena Data Insights interview, Jeniece Wnorowski and I had the pleasure of chatting with Doug Emby, Vice President of Cheetah RAID. We delved into the fascinating world of cutting-edge storage solutions tailored for edge environments, which are crucial for industries such as entertainment, defense, and autonomous vehicles.
Doug shared insights about the remarkable Cheetah RAID Raptor and Prowler servers. These servers are designed to handle the rigorous demands of media and entertainment as well as military applications. The Raptor 2U server, in particular, is a powerhouse with its high storage capacity, robust performance, and rugged design. It boasts up to 737.28TB of storage capacity (using Solidigm D5-P5336 61.44TB SSDs), hot-swappable NVMe canisters, and support for PCIe Gen4, ensuring rapid data transfer and reliable performance even in extreme conditions. You can imagine how these hot-swappable canisters could be used to quickly capture data in rugged environments and transport it swiftly at the end of a shoot day or in a mobile deployment. Given the reliability of SSD technology, this use case works effortlessly without significant risk of data loss or drive degradation.
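The headline capacity is straightforward drive math; note that the canister count below is inferred from the quoted totals rather than stated in the interview:

```python
drive_tb = 61.44   # Solidigm D5-P5336 capacity per drive
drives = 12        # inferred: 737.28 / 61.44 = 12 canisters in the 2U
print(drive_tb * drives)  # 737.28 TB, matching the quoted figure
```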
How much data can these solutions handle? A key part of our discussion was about the importance of scalable storage solutions for managing vast amounts of data at the edge. The Solidigm D5-P5336 SSD, which is integrated into Cheetah RAID’s systems, stood out for its high capacity and performance. This SSD is optimized for data-intensive workloads, including AI-driven data lakes, big data analytics, and scale-out NAS, providing efficient and rapid storage and retrieval of extensive datasets. Jeniece shared some insights on how Solidigm’s SSD technology is integral to these advancements. She highlighted features such as enhanced power loss data protection, hardware encryption, and temperature monitoring, all of which are essential for maintaining data integrity and performance in various edge applications.
One of the most compelling parts of our conversation was understanding the synergistic innovation driven between the two companies. Their collaboration has resulted in powerful and efficient storage solutions that are being relied on by IT leaders across industries. These servers ensure that critical data is stored securely and accessed quickly when needed, making a significant impact in real-world scenarios. Cheetah RAID’s high-performance server/storage, featuring hot-swap drive canisters and Solidigm’s 61.44TB drives, makes a powerful statement unmatched by others in the market.
For those interested in the technical details and practical applications of these innovations, the full interview provides a wealth of information on the future of scalable storage solutions at the edge. To dive deeper into our discussion, you can visit the TechArena interview here and learn more about Cheetah RAID's innovative products on their official page here.

TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Cheetah RAID VP Doug Emby about the innovative solutions his company is delivering to edge environments across a wide swath of applications from the entertainment industry to defense, and how innovative SSD designs from Solidigm help provide a foundation for storage performance and efficiency.

TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Taboola Vice President of Information Technology and Cyber, Ariel Pisetzky, about how his company is reshaping the marketing landscape with AI-infused customer engagement tools.

Supermicro has been a player in the tech industry for over 30 years, focusing on building breakthrough solutions for data center compute requirements. Their history as a nimble infrastructure supplier has propelled them to leadership in AI-era compute delivery. This is why I was so excited to invite Supermicro’s Paul McLeod to the TechArena Data Insights podcast sponsored by Solidigm. My co-host Jeniece Wnorowski and I put Paul through his paces to discuss Supermicro’s perspective on AI-era computing, what customers are demanding of infrastructure, and how the data pipeline is a central innovation focus for today’s deployment targets.
The changing landscape of data management in AI workloads
Paul started by discussing the history of data management across data center environments, noting that IT has traditionally involved infrastructure silos for specific storage needs, with limited data accessibility across storage solutions. Paul added that with AI, this is changing: AI demands that all these data types and pipeline workloads function simultaneously. Supermicro is leveraging this evolving requirement to deliver value to customers. Paul pointed out that Supermicro’s heritage includes early use of NVMe technology, giving them valuable experience in storage solutions for AI.
This has been shaped by a flattening of the traditional tiered storage model. Previously, cold tiers existed for data that rarely needed to be accessed. With AI, however, fast access to almost all data has become critical, meaning that the cold tier is heating up into warm-tier storage where flash alternatives shine. For this transition, Supermicro's solutions have featured Solidigm’s D5-P5430 SSDs. These SSDs were designed to solve the unique challenges of data center environments, including delivery of the high-density, high-performance storage drives needed for AI training. The D5-P5430, Solidigm's premier QLC-based offering, is available in various form factors to accommodate different server designs and thermal requirements while delivering impressive capacity of up to 30 terabytes. Paul noted that the technology was dialed in for Supermicro’s requirements, highlighting that bottlenecks have shifted from storage to compute and network. This is made even better by key collaborations with storage partners, taking advantage of underlying infrastructure to fuel even the most grueling customer requirements.
Looking ahead: The future of data storage for AI workloads
Where does the market take platform innovation next? Paul pointed to the need for continued innovation across the data pipeline to reach additional scale in performance, compute density, and efficiency. As large language models scale and customers demand more compute to train algorithms, keeping the data pipeline balanced and fed will rely on continued industry collaboration with partners like VAST Data and Solidigm. Be sure to visit Supermicro’s and Solidigm’s websites for more information about storage and compute solutions for the AI era, and continue following the TechArena as we explore data insights.

TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Weka’s Joel Kaufman, as he tours the Weka data platform and how the company’s innovation provides sustainable data management that scales for the AI era.

TechArena host Allyson Klein is joined by Solidigm’s Jeniece Wnorowski as they continue to explore rapid data innovation fueling today’s computing. In today’s episode, they chat with VAST Data’s Global VP of Engineering, Subramanian Kartik, as he describes how his team has delivered a breakthrough data platform for the AI Era.

I recently attended NVIDIA GTC, called by some the Woodstock moment of the AI era, and I’m still unpacking what we learned there about industry innovation to fuel AI workloads. While the TechArena packed in as many conversations as possible with industry innovators at the event, one conversation that stood above the rest was our interview with CoreWeave’s Jacob Yundt. He leads infrastructure buildout for CoreWeave as the company charts a trajectory for delivering unparalleled scale for AI training in the cloud.
How did they do it? As we have seen at many inflection points, CoreWeave took advantage of not being encumbered by legacy to deliver a cloud stack purpose-built for AI training clusters, from initial provisioning to health checks, orchestration, and scheduling. This enables the company to bring up a staggering number of GPUs for a particular training task at warp speed while providing reliable compute throughout the training period. CoreWeave provides proactive oversight of its instances to ensure that precious training cycles are not disrupted by hardware failures, I/O issues, or the other maladies that confront data center infrastructure.
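For a flavor of what that proactive oversight can involve, here is a minimal node health gate sketch. The checks and thresholds are hypothetical, illustrating the idea rather than CoreWeave's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical health gate for a training scheduler -- illustrative
# only; these checks and thresholds are not CoreWeave's actual stack.
@dataclass
class NodeHealth:
    gpu_ecc_errors: int   # accumulated GPU memory error count
    nvlink_up: bool       # interconnect status
    disk_io_ms: float     # storage latency in milliseconds

def schedulable(h: NodeHealth) -> bool:
    """Hand only healthy nodes to a job: one bad node can stall an
    entire synchronous training step across thousands of GPUs."""
    return h.gpu_ecc_errors < 10 and h.nvlink_up and h.disk_io_ms < 5.0

nodes = [NodeHealth(0, True, 1.2), NodeHealth(42, True, 0.9)]
print(sum(schedulable(n) for n in nodes), "of", len(nodes), "nodes pass")
```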
CoreWeave has developed a cult-like following amongst AI startups, where speed to train an algorithm is often the difference in capturing a market opportunity. Jacob clarified their market focus as any customer looking to do “ground-breaking work at incredible scale,” and this speaks to the type of underlying infrastructure requirements they have across compute, storage, and network. The demand for this infrastructure is stark: CoreWeave has been on record stating that power demand from its training clusters alone may stress local power grids in the communities where it operates, and demand for CoreWeave itself is growing exponentially. Valued at $7B last December, the company’s reported valuation just four months later has surged to $16B, underscoring the growth potential of AI training.
So what infrastructure is CoreWeave tapping to deliver their AI service? It’s no secret that their training relies on NVIDIA GPUs, and CoreWeave will be integrating next-generation Blackwell GPUs into clusters utilizing liquid cooling technologies. But Jacob stressed that there’s more than GPUs behind the groundbreaking scale they’ve been able to achieve. That scale starts with re-imagining the data pipeline, and CoreWeave has leaned into a strategic partnership with VAST Data to deliver innovative data management and control that scales with GPU performance needs. VAST Data’s platform has driven new capabilities for managing data sets, bringing data more efficiently and quickly to the processing complex and eliminating much of the overhead associated with traditional tiered storage solutions.
Jacob stated that the collaboration with VAST Data begins with his team’s love of QLC storage and the careful balance between performance, capacity and efficiency that QLC delivers. To say that Jacob is a fan of QLC is an understatement, and it’s no surprise given QLC’s advantages over TLC technology in delivering increased data density per cell. Jacob stated that his long-standing collaboration with Solidigm has ensured QLC deployment in his data centers with a partnership that extends beyond procurement to account and engineering support. When you consider the size of LLMs being trained at CoreWeave, it’s easy to guess that that’s a lot of QLC NAND being deployed.
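The density advantage is simple bit math, generic to NAND rather than specific to any vendor: QLC stores four bits per cell to TLC's three.

```python
tlc_bits_per_cell, qlc_bits_per_cell = 3, 4
gain = (qlc_bits_per_cell - tlc_bits_per_cell) / tlc_bits_per_cell
print(f"{gain:.0%} more capacity from the same number of cells")  # 33%
```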
So what’s next for CoreWeave? Watch this space to learn more about their continued infrastructure buildout as a harbinger of broader AI market adoption. I’m also interested to see if CoreWeave can make a dent in the cloud service provider landscape with their built for AI training stack. I’ll also be reporting on advances of the data pipeline infrastructure industry including in my Data Insights series with Solidigm.

TechArena hosts Allyson Klein and Jeniece Wnorowski chat with CoreWeave’s Jacob Yundt about how his organization is delivering a scalable data pipeline to AI customers utilizing breakthrough VAST Data solutions featuring Solidigm QLC SSDs.

TechArena kicks off a Data Insights series in collaboration with Solidigm, and TechArena host Allyson Klein welcomes co-host Jeniece Wnorowski and Solidigm data center marketing director Ace Stryker to the program to talk about data in the AI era, the series’ objectives, and how SSD innovation sits at the foundation of a new data pipeline.

TechArena host Allyson Klein chats with Solidigm’s Roger Corell and Tahmid Rahman at the OCP Summit about their company’s heritage in the storage arena and how their SSD portfolio delivers the performance and efficiency required for the AI era.