Learn how Solidigm SSDs are delivering 10x-20x performance gains and 40% cost savings for enterprise AI during Supermicro’s Open Storage Summit this August.
OpenAI’s GPT-5 outperforms rivals in coding, context retention, and accuracy—setting a new bar for enterprise AI while signaling a subtle shift toward openness.
Market share shakeups, pricing shocks, and a tectonic shift in the open internet: Intel’s Lynn Comp unpacks the AI developments of 2025 that no one could have predicted.
Global surge in submissions reveals the pivotal role of storage in scaling AI training, with new checkpoint tests tackling failure resilience in massive accelerator clusters.
MLCommons launches industry-standard benchmarks for LLM performance on PCs, cutting through marketing hype and giving developers and enterprises the transparent metrics they need.
As Chinese EV giants like BYD rise, German automakers are forging an unlikely alliance, though history suggests such partnerships are often short-lived.
Autonomous vehicles reduce accidents by using sensor fusion (cameras, radar, LIDAR) for better perception. Advances in AI, including CNNs and vision transformers, will enhance safety and performance.
We’re homing in on these trends for 2025: Will NVIDIA GPUs face real competition? What innovations will we see in AI fabric? And how will our planet support the power-hungry future of AI?
Jim Fister writes that AI's success depends on careful data management and strategic application rather than on AI alone to solve problems. He stresses the importance of data structure and veracity in generating meaningful insights.
Allyson Klein recaps an insightful conversation with co-host Jeniece Wnorowski and Ariel Pisetzky, VP of information technology and cyber at Taboola, about the transformative impact of data and AI on ad placement.
Urban Machine, featured on the TechArena podcast, uses AI and robotics to repurpose lumber, reducing construction waste. Recognized at SXSW, they won the 2024 Mighty Materials Award and seek $20M in EPA funding. Listen to our podcast with CTO Andrew Gillies.
Discover how AI is transforming pharma research, accelerating drug development, and reducing costs. This blog highlights key innovations like Health Technology Innovations, Inc.'s Cryo-FAST platform, which speeds up early-stage research. Learn about AI's impact through real-world examples of collaboration and implementation.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose critical network bottlenecks, which is why infrastructure decisions must happen before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.