As part of Flex, JetCool is scaling its microconvective cooling technology to help hyperscalers deploy next-gen systems faster, streamlining cooling deployments from server to rack in the AI era.
At GTC DC, NVIDIA outlined DOE-scale AI systems, debuted NVQLink to couple GPUs with quantum processors, partnered with Nokia on AI-RAN's path to 6G, set a 2027 target for Uber robotaxis, and highlighted Synopsys' GPU gains.
Storage architecture becomes the invisible force determining whether AI deployments, now rapidly moving beyond pilot projects, generate profit or burn cash on throttled tokens.
While enterprises pour resources into more GPUs, up to 30% of that computing power sits idle waiting for data. The solution isn't more hardware; it's smarter network architecture.
Design shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.
Converging forces, including affordable SSDs, ransomware requiring fast restoration capabilities, and AI workloads needing assured data integrity, are redefining protection strategies at unprecedented scale.
Discover AI's role in scientific breakthroughs, plus advances in cooling, networking, and data management, as TechArena dives into the innovations reshaping supercomputing at SC24.
Four months into her tenure, Cornelis Networks' CEO Lisa Spelman opens up about her leadership approach, vision for AI’s potential, the value of leveraging collective expertise, and much more.
What Will You Do with 122? Solidigm is reshaping the data storage landscape with today's announcement of the first-in-class 122-terabyte D5-P5336 drive.
In this Great Debate, a stellar line-up of industry experts delves into enterprise adoption of AI, the growth of AI in 2025 and beyond, the infrastructure backbone supporting this growth, and more.
The former Chief AI Strategist at DataRobot/Dataiku and founder of VEOX Inc. – once known as Homeless Ben – delivered a jaw-dropping keynote address on Day 2 of MLOps World – GenAI Summit.
At MLOps World/GenAI Summit 2024, machine learning and AI students, professionals, and leaders from around the globe connect and build community, seeking tools and best practices to advance their work.
Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.
From SC25 in St. Louis, Nebius shares how its neocloud, Token Factory PaaS, and supercomputer-class infrastructure are reshaping AI workloads, enterprise adoption, and efficiency at hyperscale.
Runpod head of engineering Brennen Smith joins a Data Insights episode to unpack GPU-dense clouds, hidden storage bottlenecks, and a “universal orchestrator” for long-running AI agents at scale.
Billions of customer interactions during peak seasons expose critical network bottlenecks, which is why infrastructure decisions must happen before you write a single line of code.
Recorded at #OCPSummit25, Allyson Klein and Jeniece Wnorowski sit down with Giga Computing’s Chen Lee to unpack GIGAPOD and GPM, DLC/immersion cooling, regional assembly, and the pivot to inference.