
3 Key Takeaways from Chiplet Summit 2025

January 23, 2025

Innovators from around the globe gathered in Santa Clara this week for the third annual Chiplet Summit to learn the latest in artificial intelligence (AI) and machine learning (ML) acceleration, the open chiplet economy, advanced packaging methods, die-to-die interfaces, and more. The TechArena team had courtside seats for the event, and with chiplets underpinning the continued advancement of Moore’s Law, we were keen to understand the state of industry innovation.

We have been talking about an open chiplet economy on the TechArena platform since we named UCIe one of the top innovations in tech in 2022. The speed of design, the efficiency of multi-process solution delivery, and the opportunity for best-in-breed chiplet integration offer unquestionable value for the market. But while all major compute architectures embrace chiplet designs today, the open chiplet economy has been slow to emerge because missing elements, such as common form factors and interoperability testing, still need to be addressed by the industry. Technologists attending the event this week enjoyed opportunities to learn from industry pacesetters about the current state of addressing these gaps as discussions centered on how to create the latest chiplet designs in less time, for less cost, and in a more scalable way.

The event featured a series of keynotes from semiconductor heavyweights, including Alphawave Semi, Arm, the Open Compute Project Foundation, Synopsys, and Teradyne. While these companies provided incredible insight into the state of the industry, a few takeaways rose above the rest:

  1. The industry is embracing chiplet technologies as the foundation for future AI scaling – In his keynote, Tony Chan Carusone, chief technology officer at Alphawave Semi and professor of electrical and computer engineering at the University of Toronto, unveiled Alphawave’s vision for the future of AI scaling, connectivity, and custom silicon solutions. He discussed how groundbreaking advancements in chiplet technology are enabling the rapid evolution of AI systems while addressing critical challenges of sustainability, cost, and performance. He framed three pivotal challenges driving innovation in AI: the insatiable demand for compute, the diversity of workloads requiring customized hardware, and the increasing complexity and cost of traditional monolithic chip designs. He emphasized how chiplet-based architectures are uniquely positioned to address these issues: by leveraging modular design principles and advanced packaging technologies, chiplets enable faster development cycles, greater scalability, and performance tailored to AI workloads.
    Alphawave Semi showcased its AlphaChip 1600 IO, a multi-protocol chiplet that supports diverse connectivity requirements, including Ethernet and optical solutions, vital for enabling next-generation AI infrastructures.
  2. Collaboration is becoming increasingly critical across the semiconductor supply chain – Eddie Ramirez, vice president of marketing at Arm, delivered an insightful keynote outlining the pivotal role of chiplet technology and ecosystem collaboration in addressing the challenges of AI scaling and democratizing computing. With expertise spanning data centers, networking, and edge solutions, Arm is enabling partners to accelerate innovation while optimizing cost and performance, he said. We first covered Arm’s ecosystem programs in this arena in 2023 at the Open Compute Project Summit, and it was terrific to see the groundswell of support Arm has developed with partners, reflecting a broader trend in chiplet advancement.
    In his talk, Eddie noted that 80% of AI workloads now center on inference, a shift from 2024’s more training-centric focus, requiring specialized silicon and heterogeneous integration to meet the needs of both data center and edge environments. Chiplet-based architectures provide a solution to the rising costs and complexity of traditional monolithic designs, enabling reusable silicon and faster development cycles. Eddie highlighted examples of Arm’s ecosystem collaborations, including NVIDIA’s Grace Hopper system and Amazon’s Graviton processors. Grace Hopper showcased true heterogeneous integration, combining Arm Neoverse CPUs with NVIDIA’s high-performance GPUs to enhance AI pipeline efficiency. Meanwhile, Amazon’s accelerated delivery of four Graviton processor generations in five years highlighted how chiplets can streamline development and boost performance.
    Eddie also discussed Arm’s Neoverse CSS compute subsystems as a catalyst for advancing custom AI silicon. By pre-integrating IP and reducing design complexity, the company is empowering partners to bring AI-enabled CPUs to market faster, he said. He pointed to the Arm Total Design initiative and the open Chiplet System Architecture (CSA) specification as steps toward standardizing chiplet designs and fostering a more accessible computing ecosystem.
  3. Multi-die architectures are increasingly critical to the future of AI-driven semiconductor growth – Abhijeet Chakraborty, vice president of engineering at Synopsys, highlighted the transformative role of multi-die and chiplet technologies in driving semiconductor industry growth. With global semiconductor sales projected to approach $700 billion in 2025, Abhijeet emphasized how advanced chip designs are addressing the surging demand for AI and high-performance computing (HPC) applications. Multi-die architectures are becoming essential for overcoming the limitations of monolithic designs in AI chip development, he said. AI accelerators, tailored for training and inference workloads, require immense compute power, high memory bandwidth, and advanced interconnects. These demands have led to innovations like Synopsys’ UCIe-compliant interface IPs, enabling high-speed die-to-die connections and ensuring reliability through monitor, test, and repair capabilities.
    Packaging technology serves as a critical chiplet enabler, with advancements like hybrid bonding facilitating higher interconnect densities and cost-efficient stacked-die solutions, he said. Abhijeet also underscored the role of AI in electronic design automation (EDA), leveraging AI for optimization, prototyping, and multi-physics analysis to tackle the complexity of multi-die systems. Looking ahead, Synopsys anticipates rapid growth in multi-die adoption, with over 50% of HPC designs expected to utilize the technology in 2025.
UCIe – Enabling Widespread Chiplet Adoption Across Industries

As the industry races to keep up with the pace of change, the UCIe (Universal Chiplet Interconnect Express) Consortium has gained momentum in tackling chiplet integration challenges. During UCIe’s presentation at the summit, Brian Rea, marketing workgroup chair for the consortium, emphasized that the group is driving the future of chiplet integration by establishing open industry standards and fostering collaborative innovation. Milestones include the Pike Creek project, demonstrating multi-vendor interoperability, and UCIe 2.0, released in 2024, featuring enhancements for 3D integration, interconnect performance, and advanced packaging. These advancements address critical industry needs, such as scalability and seamless integration.

TechArena’s Take

So what is TechArena’s take on the state of chiplets? As with many industry-wide innovations, growing the foundational standards, IP, and tools is the first step toward massive adoption of the technology. What we saw this week at the Chiplet Summit underscores industry advancement on all fronts, and the examples of market traction felt more substantial than in last year’s keynotes. We expect to see broader application of multi-vendor chiplet solutions for custom chip delivery to fuel the insatiable compute demands of AI in 2025, bringing us closer to an open chiplet economy.

We can’t wait to hear more about standards development to support the various aspects of multi-vendor delivery, such as common form factors. We believe the industry is aligned that these challenges are key gaps requiring urgent collaboration this year. Finally, we expect our 2025 editorial coverage to feature much more on this topic as large providers continue to leverage this foundational ecosystem to drive the hundreds of billions of dollars in data center buildout expected globally.
