Microsoft Stuns with Speed of AI Innovation and Integration at MS Build

May 21, 2024

MS Build has always been a fantastic conference for developer innovation within the Microsoft environment. In 2024 it has transformed into a must-attend event for tracking AI innovation. Today, Satya Nadella and team did not disappoint, delivering a maelstrom of new announcements spanning Azure AI, Copilot, and more. The pace of announcements in the keynote reflected the pace of Microsoft innovation, and it starts with the foundational innovation of Azure infrastructure.

Satya shared a massive buildout of Azure data centers across the world, from Thailand and Malaysia to Spain and Wisconsin. Microsoft announced the world’s largest supercomputing cluster last fall, and Satya shared that they’ve grown this supercomputing capability by 30X in the last six months, an incredible pace of deployment that reflects customer demand for Azure AI services.

Silicon Collaborations Fuel Azure AI Growth

They’re delivering this through tight partnerships with industry leaders alongside homegrown innovation in Microsoft silicon. It starts with their deep partnership with NVIDIA. The collaboration was discussed earlier this year at GTC and covered on TechArena. Microsoft’s Nidhi Chappell described this collaboration as true co-invention in her TechArena interview last week, and that was reflected in plans to deliver H200-based instances later this year and expectations that Blackwell platforms will be among the first cloud instances available on Azure. These will be available to fuel Microsoft 365 and Copilot acceleration.

NVIDIA, however, is not the only game in town for Azure, and Satya stressed a commitment to the broadest choice of acceleration. Today, Satya announced an expansion of the strategic collaboration with AMD with delivery of the industry’s first ND MI300X instances for customers. This is an enormous milestone for the two companies, offering the best price-performance instances for GPT-4o inference. I expect to hear more about this collaboration at the conference, reflecting AI providers’ desire to support competitors to NVIDIA’s dominance of the AI acceleration arena.

Microsoft extends its investment in this space with its own silicon, and Satya did give a shout-out to Microsoft Maia acceleration. However, more attention for homegrown silicon went to Microsoft Cobalt processors. Satya announced the public preview of Cobalt-based VMs for cloud native computing. These Arm-based solutions are being delivered to customers including Elastic, MongoDB, Snowflake, and more, and they put the silicon industry on notice: while Microsoft was comparatively late to in-house silicon development, it is not slowly exploring this space but rapidly integrating it into customer services.

With this rapid growth of compute capacity and capability, we need to consider Microsoft’s utility bill to power this infrastructure. Satya gave an update on his team’s energy efficiency goals, stating that Microsoft is on track to meet 100% renewable energy use across global Azure data centers by next year. He pointed to specific innovations in advanced power and cooling technologies helping Azure meet these commitments. While this is a fantastic achievement, especially given the challenge of renewable energy availability across the diverse geographical landscape in which Microsoft operates, I'd like to learn more about advancements in embodied carbon and true circularity given the speed of infrastructure investment.

Infrastructure Innovation Fuels AI Integration and Societal Transformation

So what does this buildout and innovation deliver? Satya spoke to the performance and efficiency advancements Microsoft is delivering to customers, giving the example of ChatGPT achieving 12X cost savings and 3X performance improvements since its launch in Q4 2022. That’s a 1.5X performance gain vs. Moore’s Law, in case you’re tracking.
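The Moore’s Law comparison can be sanity-checked with quick arithmetic. A minimal back-of-envelope sketch, assuming roughly 18 months between a Q4 2022 launch and this keynote, and reading Moore’s Law as a doubling every 18 months (both assumptions are mine, not from the keynote):

```python
# Back-of-envelope check of the keynote's Moore's Law comparison.
# Assumptions (not from the keynote): Q4 2022 launch to May 2024 is
# ~18 months, and Moore's Law is read as 2x gains every 18 months.
months_elapsed = 18
doubling_period_months = 18

# What Moore's Law alone would predict over the elapsed period.
moore_expected = 2 ** (months_elapsed / doubling_period_months)  # 2.0x

# The 3X performance figure cited in the keynote.
observed_speedup = 3.0

# How far ahead of the Moore's Law baseline the observed gain lands.
gain_vs_moore = observed_speedup / moore_expected

print(f"Moore's Law baseline: {moore_expected:.1f}x")
print(f"Observed vs Moore:    {gain_vs_moore:.1f}x")
```

Under these assumptions, the observed 3X gain works out to 1.5X ahead of the Moore’s Law baseline, matching the figure above; a different reading of the doubling period would shift the result.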

But ChatGPT is not the only LLM being delivered in Azure AI. Satya spoke to broad model support being tapped by over 50K organizations around the world, all grounded in the foundational partnership with OpenAI. GPT-4o, the industry’s top-performing model announced just last week, has already been integrated into Microsoft Copilot and Azure AI.

Microsoft has also delivered Models as a Service (MaaS) capabilities with a handful of partners including NTT Data and expanded its ongoing open source collaboration with Hugging Face with new capabilities for developers. Satya also claimed leadership in small language models, including expansion of the Phi-3 family. Microsoft is delivering Phi-3-vision as well as Phi-3 mini, small, and medium models, with sizes to fit developer needs from roughly 3.8 billion to 14 billion parameters.

All of this capability fuels opportunity for integration across industries, and Satya briefly covered examples of customers taking advantage of the technology. A notable example of society-changing integration of AI into our world is a new collaboration with Khan Academy propelling AI’s power directly into US classrooms. Khanmigo, a Khan Academy AI tool, will help US educators offload some of the crushing operational work of managing the classroom, freeing time for educator engagement with students. And while the capability of AI will transform industries, deliver new revenue streams, and create eye-opening efficiency at work, this example provides a glimpse of how transformational a time we live in. We’re excited to see more and are thrilled to see what Microsoft is delivering to help usher in this new AI era.
