At GTC 2025, a discussion between Deloitte and VAST showed how their partnership is scaling enterprise AI with secure, auditable infrastructure—bringing business value for next-gen, agentic AI adoption.
Verge.io’s George Crump shares how a unified infrastructure approach is driving efficiency, performance, and AI-readiness — without the legacy bloat.
At GTC 2025, Nebius and VAST shared how their collaboration delivers high-performance, scalable AI infrastructure for enterprise workloads—making cloud AI more usable and accessible.
MLPerf Inference 5.0 signals the rise of large language models, with Llama 2 70B surpassing ResNet-50 in submissions and driving next-gen AI performance across compute platforms.
MemryX, a provider of edge AI acceleration hardware, recently closed its latest round of funding, serving as a potential bellwether for the next growth edge in AI compute.
From VAST Data to Weka, Graid to Solidigm — storage disruptors shined bright at NVIDIA GTC 2025. Here’s how storage innovators are redefining AI infrastructure and why it matters to the future of AI.
While enterprises pour resources into more GPUs, up to 30% of that computing power sits idle waiting for data. The solution isn't more hardware; it's smarter network architecture.
Converging forces, including affordable SSDs, ransomware requiring fast restoration capabilities, and AI workloads needing assured data integrity, are redefining protection strategies at unprecedented scale.
From analytics to AI leadership, TechArena Voice of Innovation Banani Mohapatra (Walmart) shares how experimentation, ethics, and human creativity shape the next era of data-driven innovation.
Storage architecture becomes the invisible force determining whether AI deployments, now rapidly moving beyond pilot projects, generate profit or burn cash on throttled tokens.
CXL 2.0 unlocks pooled, tiered, and elastic memory so enterprises add capacity without blowing budgets—feeding AI and data-heavy apps with near-DRAM performance on Xeon 6 platforms.
The AI surge is forcing a fundamental rethink of infrastructure strategy, from unexpected co-location demand to storage breakthroughs that challenge conventional wisdom.
TechArena host Allyson Klein and Solidigm’s Jeniece Wnorowski chat with Taboola’s Vice President of Information Technology and Cyber, Ariel Pisetzky, about how his company is reshaping the marketing landscape with AI-infused customer engagement tools.
TechArena host Allyson Klein chats with EY’s Global Innovation AI Officer, Rodrigo Madanes, about what he’s seeing from clients as they advance with AI and what this means for the industry’s innovation requirements.
TechArena host Allyson Klein chats with Intel’s Lisa Spelman about how compute requirements are changing for the AI era, where we are with broad enterprise adoption of AI, and why software, tools, and standards are needed to implement solutions at scale.
TechArena host Allyson Klein interviews Netflix’s Tejas Chopra, ahead of his keynote at MemCon 2024 later this month, about how Netflix’s recommendation engines demand memory innovation across both performance and efficiency.
TechArena host Allyson Klein chats with Physia about their generative AI-based patient care platform and how they aim to create a new AI + doctor model to improve patient care and transform the medical industry.
TechArena host Allyson Klein chats with Artefacto’s Anna Giralt Gris about her views on the future of film and the impact AI will make in reshaping one of humanity’s most creative mediums.
From racing oils to data center immersion cooling, Valvoline is reimagining thermal management for AI-scale workloads. Learn how they’re driving density, efficiency, and sustainability forward.
This Data Insights episode unpacks how Xinnor’s software-defined RAID for NVMe and Solidigm’s QLC SSDs tackle AI infrastructure challenges—reducing rebuild times, improving reliability, and maximizing GPU efficiency.
In this episode, Allyson Klein, Scott Shadley, and Jeniece Wnorowski (Solidigm) talk with Val Bercovici (WEKA) about aligning hardware and software, scaling AI productivity, and building next-gen data centers.
From AI Infra Summit, Celestica’s Matt Roman unpacks the shift to hybrid and on-prem AI, why sovereignty and security matter, and how silicon, power, cooling, and racks come together to deliver scalable AI infrastructure.
Allyson Klein talks with Synopsys’ Anand Thiruvengadam on how agentic AI is reshaping chip design to meet extreme performance, time-to-market, and workforce challenges.
From storage to automotive, MLPerf is evolving with industry needs. Hear David Kanter explain how community-driven benchmarking is enabling reliable and scalable AI deployment.