MemCon 2024: Memory Innovation Sits at the Heart of AI Delivery

March 4, 2024

Consider the industry's massive investment in AI, and it doesn't take long to realize that cutting-edge AI models are constrained by their underlying infrastructure. While processor and accelerator advancements garner the lion's share of headlines, a key bottleneck is the memory hierarchy. The faster data can be delivered for AI training, the faster a new large language model can begin serving inference to fuel new workloads. While servers continue to scale standard DRAM capacity - notably AMD's industry-leading 6TB of memory across a dozen channels - the leading edge is seeking lower latency and alternatives beyond standard configurations. In fact, cloud service providers have pointed to memory as one of the key infrastructure gaps facing AI training going forward.

This industry challenge is why MemCon is a must-attend event on my 2024 roadmap and one that the TechArena is delighted to sponsor. Pulling together leading data center operators and the leading edge of memory innovation, MemCon features two days of discussions on memory requirements, the latest memory innovations, and how the industry should work together to bridge the gap created by the insatiable demand of AI workloads.

Highlights of this year's event include an opening keynote from Microsoft's Zaid Kahn, who also spoke at last year's event. Since then, Zaid has become chair of the Open Compute Project and continued his vocal evangelism for infrastructure innovation, including in the memory arena. Joining Zaid from the operator perspective are sessions from Netflix, EY, Oracle, Shell, Roche, Berkeley Research Lab, and Los Alamos National Labs. They'll be joined by speakers from across the industry and from the consortia shaping the standards that will fuel future memory innovation, collectively highlighting the intersection of high performance computing and AI cluster development as well as broader-scale opportunities with increased memory capacity, tiered memory with CXL, and more.

Whether you're in the memory arena, run a data center and feel the pain of memory-bound workloads, or deliver platforms to market and need to keep pace with the latest silicon advancements, prioritize MemCon on your schedule this year. As a media sponsor, we're happy to offer the discount code TECHARENA15 for 15% off registration. And if you're going to be at the show, please reach out to meet up or even join us on the TechArena podcast.
