
A Mutiny of AI Silicon Innovation at AI HW and Edge Summit

September 13, 2023

Last week, I was delighted to chat with Nidhi Chappell, the Microsoft GM behind delivery of AI supercomputers, about her team's journey in architecting the infrastructure that has unleashed ChatGPT on the world. Imagine for a minute the awesome opportunity and responsibility of delivering the compute capability to unlock generative AI for the world. Imagine seeing ChatGPT before everyone else and knowing you'll soon be sharing it with the world on the supercomputer you're designing and deploying. This is what Nidhi and her team at Microsoft have delivered. What Nidhi shared about this experience was also insightful regarding the broader moment we're standing in right now. As she described leveraging her background in high performance computing to build AI training supercomputers, one thing she said stood out to me. Her team is utilizing hundreds of thousands of GPUs to train a generative AI solution that will likely transform every industry on the planet, re-architect how we look at work and societal constructs, and potentially reshape how governments address the great divide between those who thrive in the era of AI and those who struggle to survive. Those GPUs run in buildings that consume more power than entire towns, and they are as rare to find as Willy Wonka's golden tickets. We're at a moment where the most powerful innovation humans have arguably ever created rests in the hands of one company, NVIDIA, which supplies a few companies, including, of course, Microsoft, racing with Google, Amazon, and a handful of others to control the power of AI engines for the world.

Yesterday's publication of Elon Musk's biography delivers a stark warning about the consolidation of power in one individual or company and what it means for society at large. I think even leadership within the industry stalwarts that can afford to build massive AI compute engines would agree that a thriving AI industry, vibrant with competition and diffuse in power, is good for all of us. Nidhi talked at length about Microsoft's own vision of responsible AI development and delivery, with AI serving as a co-pilot to human innovation, a vision that aligns well with the company's broader reputation for responsible use of technology. But how do we get to this vibrant future? To find out, I traveled to Santa Clara for the AI HW and Edge Summit, featuring a who's who of AI silicon innovators. This event offers the perfect opportunity to assess the state of AI silicon innovation. The lineup of speakers, sponsors, and attendees confirmed that the broader industry is hungry for alternatives for data center and edge AI implementation.

What is driving this hunger? GPU supply constraints have been well documented, but other influences, such as power consumption and barriers to developer engagement, are creating demand for alternative architectures. Today's data centers consume over 3% of the world's electricity, and some forecasts predict that AI will scale compute energy demand to upwards of 20% of global consumption. GPUs are becoming the Hummers of the data center before our eyes. Can we afford to rely on this power-hungry architecture at a time when the world's climate crisis is peaking? Will this reliance on massive amounts of electricity further consolidate control of AI among those who can afford not only the infrastructure cost, but the utility bill?

Organizations are also challenged to find AI talent with true knowledge of training algorithm and framework development. How do we ensure that AI can be democratized, and does architectural choice, including custom silicon and chiplet innovation, help broaden the opportunity for AI innovation?

This week, I'll be exploring these questions with three companies challenging the AI power dynamics: Tenstorrent, the Jim Keller-led startup delivering highly scalable RISC-V + ML accelerator solutions; Lemurian Labs, a company founded in robotics but delivering new acceleration solutions for AI from data center to edge; and Alphawave Semi, a semiconductor IP engine rooted in the connectivity and chiplet solutions required for AI innovation. Collectively, I expect they'll provide insight into how broadly the industry is delivering compelling alternatives to GPUs. Watch this space for insights from the conference, and, as always, thanks for engaging. - Allyson

