Arm-based silicon now accounts for 50% of the compute AWS ships, and other major cloud providers are following, as gigawatt-era efficiency demands and software optimization drive a shift in data center architecture.
Backed by top U.S. investors, Cerebras secures $1.1B in pre-IPO funding, fueling its AI vision, market traction, and silicon-to-services challenge to NVIDIA.
TechArena Voice of Innovation Tannu Jiwnani explains how to blend GenAI-assisted coding with continuous threat modeling, automated validation, and expert review to accelerate work without compromising security.
From cloud to edge, agentic workflows are moving from pilots to production—reshaping compute, storage, and networks while spotlighting CPU control planes, GPU utilization, and congestion-free fabrics.
From manure-to-energy RNG to an aluminum-air system that generates electricity on demand, innovators tackled real AI bottlenecks—power-chain integration, rapid fiber turn-ups, AI-driven permitting, and plug-and-play capacity that speeds time-to-value.
AMD improved energy efficiency 38x—roughly a 97% drop in energy for the same compute—and now targets 20x rack-scale gains by 2030, reimagining AI training, inference, and data-center design.
Enterprise AI doesn’t create fragility; it reveals undocumented assumptions, missing ownership, and invisible pipeline debt. Fix the foundations and AI gets cheaper, faster, and more trusted.
The deal moves Synopsys’ ARC processor IP and ASIP Designer/Programmer tools to GF’s MIPS business, while Synopsys keeps interface and foundation IP and leans further into AI-era engineering.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose critical network bottlenecks, which is why key infrastructure decisions must happen before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.
Durgesh Srivastava unpacks a data-loop approach that powers reliable edge inference, captures anomalies, and encodes technician know-how so robots weld, inspect, and recover like seasoned operators.
Neeraj Kumar, Chief Data Scientist at PNNL, discusses AI's role in scientific discovery, energy-efficient computing, and collaboration with Micron to advance memory systems for AI and high-performance computing.
Guest Gayathri “G” Radhakrishnan, Partner at Hitachi Ventures, joins host Allyson Klein on the eve of the AIHW and Edge Summit to discuss innovation in the AI space, future adoption of AI, and more.
Join Allyson Klein and Jeniece Wnorowski as they chat with Rita Kozlov from Cloudflare about their innovative cloud solutions, AI integration, and commitment to privacy and sustainability.
Arun Nandi of Unilever joins host Allyson Klein to discuss AI's role in modern data analytics, the importance of sustainable innovation, and the future of enterprise data architecture.
Join Allyson Klein as she welcomes former colleague and industry innovator Jen Huffstetler. Jen shares her extensive experience driving advancements from client devices to the data center, including groundbreaking technologies like Centrino and 3D packaging.
In this TechArena episode, Allyson Klein interviews Mehdi Daoudi, CEO of Catchpoint, on internet monitoring, observability innovations, and AI's impact on automation. Discover Catchpoint's latest tools for enhancing performance.