In 2025, the internet’s fragility and AI’s complexity collided in public. The big vendors responded by buying the pieces they needed to sell an integrated story: that they have AI risk under control.
Robots aren’t going to fold your laundry by February. But Voice of Innovation Niv Sundaram predicts that an urgent caregiver shortage will move humanoids “from warehouses to living rooms” in 2026.
Marvell is inking a deal for optical interconnect startup Celestial AI in a massive bet that the industry has shifted from being compute-constrained to bandwidth-constrained.
As AI training pushes data centers to unprecedented power densities, researchers reveal an affordable solution that lets computing thrive on fluctuating renewable energy.
In Part 1 of Voice of Innovation Matty Bakkeren’s 2026 predictions series, he explores how AI, power, cooling, and supply chains are reshaping data center infrastructure for a utility-scale future.
As up to 10 million jobs disappear and quality content moves behind paywalls, the question isn’t if AI will reshape society. It’s whether 2026 is the year we’ll control the burn or watch it spread.
Daniel Wu joins TechArena and Solidigm on Data Insights to share his perspective on bridging academia and enterprise, scaling AI responsibly, and why trustworthy frameworks matter as AI adoption accelerates.
AWS now ships 50% Arm-based compute, and other major cloud providers are following, as efficiency in the gigawatt era and software optimization drive a shift in data center architecture.
Backed by top U.S. investors, Cerebras raises $1.1B in pre-IPO funding, strengthening its market traction and its silicon-to-services challenge to NVIDIA.
TechArena Voice of Innovation Tannu Jiwnani explains how to blend GenAI-assisted coding with continuous threat modeling, automated validation, and expert review to accelerate work without compromise.
From cloud to edge, agentic workflows are moving from pilots to production—reshaping compute, storage, and networks while spotlighting CPU control planes, GPU utilization, and congestion-free fabrics.
Dell outlines how flash-first design, unified namespaces, and validated architectures are reshaping storage into a strategic enabler of enterprise AI success.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
From CPU orchestration to scaling efficiency in networks, leaders reveal how to assess your use case, leverage existing infrastructure, and productize AI instead of just experimenting.
From the OCP Global Summit, hear why 50% GPU utilization is a “civilization-level” problem, and why open standards are key to unlocking underutilized compute capacity.
In the Arena: Allyson Klein with Axelera CMO Alexis Crowell on inference-first AI silicon, a customer-driven SDK, and what recent tapeouts reveal about the roadmap.
In this episode of Data Insights, host Allyson Klein and co-host Jeniece Wnorowski sit down with Dr. Rohith Vangalla of Optum to discuss the future of AI in healthcare.
From OCP Summit, Metrum AI CEO Steen Graham unpacks multi-agent infrastructure, SSD-accelerated RAG, and the memory-to-storage shift—plus a 2026 roadmap to boost GPU utilization, uptime, and time-to-value.