MemryX Making a Case for AI Acceleration at the Edge

February 23, 2024

Yesterday at AIFD4, I wrote about Intel’s latest musings on right-sized silicon for AI inference and how the company positions Xeon processors vs. GPU alternatives. This was the latest of many publications on this platform regarding the renaissance of silicon innovation in the AI arena, one that I expect will be a feature throughout 2024 and beyond. One company that I’ve wanted to talk to in this space is MemryX, a startup founded at the University of Michigan and focused squarely on delivering AI acceleration for visual processing at the edge.

Why is this important? Once again, we come back to data gravity. While massive AI clusters in the cloud may be the heavyweight champs of AI, moving data for AI inference is often impractical, expensive, or downright unfeasible given latency considerations. Instead, solutions require fast inference at the edge, delivered within the constraints of a vast landscape of edge environments. This calls for solutions that pair performance with watt-sipping energy consumption.

MemryX has designed a solution that fits that bill. I caught up with VP of Product and Business Development Roger Peene, and he described a solution that is up to 100X more efficient than a CPU and 10X more efficient per watt than a GPU. What impresses me about MemryX is that they know what they want to be! They are squarely focused on an in-demand workload in the rapidly advancing edge infrastructure market. And unlike many startups in this space that talk a great PowerPoint game, they have actual silicon! In fact, they received their first silicon late last year and have it up and running in labs today, with customers to follow in the near future. I’m delighted to see the move to real product delivery in a space so desperate for solution alternatives to fuel customer demand, and I can’t wait to hear more from Roger and the team as solutions reach customer trials and deployments in the months ahead.
