
Has GenAI Hit the Chasm? Unpacking Microsoft's News

February 24, 2025

Over the weekend, Satya Nadella announced that the economic opportunity from Gen AI is not manifesting as expected. In true AI-era style, Microsoft followed up at warp speed, announcing the cancellation of hundreds of megawatts of future compute capacity by terminating leases for planned AI data center expansion. Street speculation is that this reflects Microsoft's OpenAI compute commitments and a rapidly evolving market. The update will likely send shockwaves through the infrastructure community this week as we all grapple with what it means for broader infrastructure demand, for enterprise adoption curves of generative AI, and for how AI innovation will be fueled moving forward. The news reminded me of Geoffrey Moore, Clayton Christensen, and an insightful conversation I had last week. Let's unpack.

What is driving this change? Satya signaled that it comes down to the forecasted economic return on generative AI. But what changed in the forecasts since the dizzying expectations of... three months ago? I posit two possible explanations: either the enterprise demand that large model providers were counting on has not materialized, or the introduction of new, efficient models has reshaped the GenAI cost curve.

We were all stunned by the introduction of DeepSeek and the efficiency it delivered versus OpenAI's models. While we discussed the technical approach at length, one thing that got less attention is that DeepSeek is, economically, a classic Christensenian disruptive innovation: it delivers a million input tokens for somewhere between $0.14 and $0.55, depending on whether the input hits the cache. That compares to $15-60 with OpenAI prior to DeepSeek's introduction, or $7.50 today... for similar performance. While this is a boon for anyone seeking an affordable model, it's also a sign that OpenAI's revenue forecasts just hit an iceberg, and that the revenue forecasts underpinning new infrastructure may not materialize. The generative AI model market is complex and innovating quickly, and this is an oversimplified snapshot of just two models in a vast sea of alternatives. We will be unpacking some of these diverse models in the days ahead, and why they are needed, but for today, one is left to wonder whether this is an OpenAI problem or a broader generative AI problem. Microsoft signaled the latter, which takes us to enterprise demand.

Generative AI may have reached its chasm moment. I am, of course, referring to Geoffrey Moore's foundational model of technology adoption. Time and again, a technology is billed in heady brilliance for the change it will bring; we reach the chasm, where people lose faith in the vision; and then (most of the time) we climb up the other side into practical adoption. This pattern explains where we have been with technologies like virtualization and cloud computing, where we currently are with 5G, which we will be covering next week at MWC, and the change we are seeing in real time with Gen AI.

The truth is that Gen AI is a strikingly impactful tool, but at least in the near term, its impact is felt unevenly across job functions. Marketing teams and customer service departments are rapidly evolving today with the power of this technology; operating theaters and factory floors, meanwhile, have felt AI's impact for years through previous generations of the technology. Think image recognition, natural language processing, and recommendation engines. These technologies feel like old hat today, but they are, indeed, enjoying massive deployment as accepted tech.

This phenomenon played out in conversations I had last week at Cloud Field Day, where it became very apparent that the value of Gen AI integration into cloud monitoring tools was seen as anemic because it focused on admin console chat improvements. Practitioner delegates wanted deeper integration into actual monitoring, leveraging traditional ML, while vendors seemed hard-pressed to justify valuations in the Gen AI hype cycle. The GenAI chasm was palpable in that room: an example of the unequal value proposition across job functions.

So, where does this leave us? It's time to unpack practical application targets, explore ethical considerations, trust, and safety, and examine Gen AI's role in text, code generation, and agentic workloads. Microsoft did confirm that its $80 billion in infrastructure spending this year remains on track, meaning that, at least for now, data center spend has slowed in forecasts but not cratered. While we have tempered our enthusiasm about GenAI's broad-scale impact, given how unevenly its core value lands across job functions, we still see mass disruption in knowledge-based work as GenAI tools become powerful accelerators. It's only February! Buckle up. 2025 is going to be a fascinating year in tech.

