Intel Showcases IT Strengths with Sapphire Rapids Delivery
Today, Intel delivered its 4th Gen Xeon Scalable processors to the world, complete with a 52-SKU lineup spanning Bronze to the new Max Series. With their arrival comes a new battle for data center deployments as Intel seeks to compete more effectively with AMD’s 4th generation EPYC processors. So what is the landscape for data center compute heading into 2023, and how should we view these new Xeon Scalables vs. AMD alternatives? Most importantly, what will enterprise customers choose? Let’s break it down:
Intel takes acceleration to the Max
With today’s launch, Intel doubled down on a message of workload acceleration, as it’s both the breadth and type of workload accelerators that give Intel its competitive edge. The new generation of Xeons features new AI acceleration with the introduction of Advanced Matrix Extensions (AMX), expanding on the vector processing acceleration of previous generations and competitive CPU offerings. But Intel has done more here. They’ve restructured their positioning of embedded accelerators, introducing a Data Streaming Accelerator (DSA), an In-Memory Analytics Accelerator (IAA), and an expansion of Advanced Vector Extensions (AVX) for vRAN implementations. Imbued with this slew of accelerators, Intel is making it very clear that they intend to drive optimized performance across workload classes, with unique optimizations spanning AI and analytics to network functions and security. While the launch notably leaned into gen-over-gen performance comparisons, avoiding competitive bakeoffs, we should expect to see how these accelerators compete with the brute performance of AMD’s EPYC processors in the days ahead. I’d also expect to see competitors trying to dismantle the embedded accelerator approach as costly overhead compared to more nimble designs, as we heard earlier from Ampere on the TechArena.
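For readers who want to see whether a given system actually exposes the new instruction-level accelerators, here is a minimal sketch, assuming a Linux host and the feature-flag names the kernel commonly reports in /proc/cpuinfo (such as amx_tile and amx_bf16). Note that device-style accelerators like DSA and IAA enumerate as PCIe devices rather than CPU flags, so they are not covered by a probe like this.

```python
# Rough capability probe for 4th Gen Xeon instruction-level accelerators on Linux.
# Flag names are assumptions based on what the Linux kernel typically reports in
# /proc/cpuinfo; availability varies by SKU and kernel version.

ACCEL_FLAGS = {
    "amx_tile": "Advanced Matrix Extensions (tile registers)",
    "amx_bf16": "AMX bfloat16 matrix math",
    "amx_int8": "AMX int8 matrix math",
    "avx512_vnni": "AVX-512 Vector Neural Network Instructions",
}

def detect_accelerators(cpuinfo_path: str = "/proc/cpuinfo") -> dict:
    """Return which accelerator-related flags the first CPU entry advertises."""
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
    return {name: name in flags for name in ACCEL_FLAGS}

if __name__ == "__main__":
    for flag, present in detect_accelerators().items():
        status = "available" if present else "not reported"
        print(f"{ACCEL_FLAGS[flag]}: {status}")
```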
A little help from their friends
Intel expanded on workload acceleration by leaning into the depth of industry and stack expertise at their command, including a decades-long history of optimizing workloads to run best on their processors. It’s no surprise that the launch featured cloud stalwarts including AWS, Google Cloud, and Microsoft Azure, but there was also a reminder that Intel has invested heavily in networking, with the inclusion of Ericsson and Telefonica, and a surprising highlight on NVIDIA alongside the launch of the Max Series GPUs, potentially a circling-the-wagons response to AMD’s recent MI300 announcement.
The TechArena Take
It would be easy to conclude that, though Sapphire Rapids delivers breakthrough capability compared to the previous generation, it still does not deliver the peak performance of EPYC. I, for one, am guilty of focusing on top-bin performance headlines. However, we must remember that most customers purchase mid-range CPUs and select processors for myriad reasons, including full-stack tuning and platform trust. When I think about this battle of the CPU titans, my mind drifts to the automotive industry, where similar comparisons are made between car engines and zero-to-sixty times. There is a small subset of enthusiast drivers who purchase based on speed off the line, but most buyers are looking at brand trust, cockpit experience, and other factors to fuel their purchase decision. In 2023, Intel is showcasing its strength of intimate knowledge of what customers care about, and it would be wise not to underestimate the value of that knowledge in the marketplace. I can’t wait to pop some popcorn and see how this plays out. As always, thanks for engaging - Allyson