Intel's decision to outsource marketing to Accenture and generative AI sparks debate: is this a visionary leap into the future of work or a symptom of a deeper retreat from innovation leadership?
Purpose-built for agentic AI, WEKA’s NeuralMesh delivers microsecond data access, self-healing resilience, and exabyte-scale performance for the next generation of real-time AI workloads.
From GTC to Data Center World, Hypertec and Solidigm are showcasing immersion-born infrastructure that’s purpose-built for high-density, sustainable AI and HPC workloads.
At Advancing AI, AMD unveils MI355 with 35× gen-over-gen gains and doubles down on open innovation – from ROCm 7 to Helios infrastructure – to challenge NVIDIA’s AI leadership.
The deal marks a strategic move to bolster Qualcomm’s AI and custom silicon capabilities amid intensifying competition, and it may signal the start of a wave of AI silicon acquisitions.
A new partnership combines WEKA’s AI-native storage with Nebius’ GPUaaS platform to accelerate model training, inference, and innovation with microsecond latency and extreme scalability.
The deal moves Synopsys’ ARC processor IP and ASIP Designer/Programmer tools to GF’s MIPS business, while Synopsys keeps interface and foundation IP and leans further into AI-era engineering.
As AI breaks the networking playbook and data centers hit the power wall, the optics industry enters a chaotic “2003 moment.” Mark Grodzinsky explores why the lessons of Wi-Fi will define the winners of the AI era.
Deterministic wireless is becoming the nervous system of AI. As robots and XR scale, “best effort” turns into business risk—and networks must deliver predictable, identity-driven, secure performance.
Allyson Klein predicts inference spreading from cloud to edge, agentic oversight reshaping ops, privacy battles intensifying, scientific computing facing brain drain, and quantum finally breaking through.
Deploying the future: At CES 2026, the Arm ecosystem is delivering AI from the cloud to the front lines—powering mobility, robotics, and personal computing with fast, efficient, on-device intelligence.
By delivering AI performance with one-sixth the hardware footprint, PEAK:AiO is redefining software-defined storage to make scalable AI infrastructure more affordable, efficient, and open.
TechArena host Allyson Klein chats with Intel’s Lisa Spelman about how compute requirements are changing for the AI era, where broad enterprise adoption of AI stands today, and the software, tools, and standards required to implement solutions at scale.
TechArena host Allyson Klein interviews Netflix’s Tejas Chopra about how Netflix’s recommendation engines demand memory innovation across both performance and efficiency, ahead of his keynote at MemCon 2024 later this month.
TechArena host Allyson Klein chats with Physia about their generative AI-based patient care platform and how they aim to create a new AI + doctor model to improve patient care and transform the medical industry.
TechArena host Allyson Klein chats with Artefacto’s Anna Giralt Gris about her views on the future of film and the impact AI will have in reshaping one of humanity’s most creative mediums.
TechArena host Allyson Klein talks with MemryX VP of Product and Business Development Roger Peene about how the company’s silicon is transforming AI at the edge and why we’re in the midst of an AI revolution.
TechArena host Allyson Klein chats with Unravel Data CEO Kunal Agrawal about how his organization is tapping AI to disrupt the data observability arena.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose critical network bottlenecks, which is why infrastructure decisions must happen before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.