The pace of AI innovation is accelerating, and Sema4.ai’s vision goes beyond large language models (LLMs) to the transformative potential of AI agents. These agents, unlike traditional software, complete tasks autonomously, acting as knowledge workers that can reason, collaborate, and deliver work products. Sema4.ai pioneers this technology, offering AI agents optimized for specific industries and significantly enhancing productivity.
Lisa Spelman introduced as new CEO of Cornelis Networks
Unveiling the Role of Advanced Semiconductor Packaging in Powering AI: Explore the innovations in 2.5D and 3D packaging, high bandwidth memory, and chiplet solutions driving AI infrastructure into the future.
TechArena’s take on Satya Nadella’s keynote at MSBuild 2024. This post covers infrastructure, silicon collaborations, and service delivery.
TechArena’s take on custom silicon advancements in the AI era with Alphawave Semi.
As AI training pushes data centers to unprecedented power densities, researchers reveal an affordable solution that lets computing thrive on fluctuating renewable energy.
In Part 1 of his 2026 predictions series, Voice of Innovation Matty Bakkeren explores how AI, power, cooling, and supply chains are reshaping data center infrastructure for a utility-scale future.
As up to 10 million jobs disappear and quality content moves behind paywalls, the question isn’t if AI will reshape society. It’s whether 2026 is the year we’ll control the burn or watch it spread.
New liquid cooling solutions have introduced a critical new system into data centers, and one company's “magic dust” additive packages are proving essential to keeping AI running.
Intel veteran and Machani Robotics CSO/CTO Niv Sundaram, one of TechArena’s newest voices of innovation, talks emotionally intelligent AI, companion humanoids, and why real innovation starts and ends with human wellbeing.
With AI racks exceeding 100kW, immersion cooling isn’t optional anymore. Midas’s operator-driven design delivers hot-swappable maintenance and thermal recovery economics.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose network bottlenecks, which is why critical infrastructure decisions must be made before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.