Tech industry vet Lynn Comp explores the benefits of industry partnerships and why understanding both your own business drivers and your partner's is critical to success.
Iceotope's liquid cooling tech is shaking up data centers as AI drives the need for efficient heat management. Dr. Kelley Mullick explains the shift from air to liquid cooling, highlighting Iceotope's sustainable solutions.
Jim Fister dives deep into the intricacies of system memory, latency, and data management, exploring how modern computing architectures handle data retrieval and processing.
AI is driving an acceleration of compute demands, fueled by the proliferation of large language models across industry use cases. That demand comes just as traditional semiconductor technology pushes against the laws of physics and Moore's Law slows.
This week, Contextual AI partnered with WEKA to deliver enterprise AI services on Google Cloud using its RAG 2.0 approach, which builds on the retrieval-augmented generation technique pioneered at Facebook AI Research. WEKA's platform boosted performance, delivering a 3X performance increase in key AI use cases, 4X faster model checkpointing, and reduced costs.
Data center industry veteran Lynn Comp shares insights from her career, discussing the importance of adaptability, emotional IQ, and practical tech. She highlights three patterns: customers prefer reliability over high-performance solutions, network limitations can negate compute power, and economic realities can override technological enthusiasm.
Open models move fast—but production doesn’t forgive surprises. Lynn Comp maps how to pair open-source AI with a solid CPU foundation and orchestration to scale from pilot to platform.
What modern storage really means, how on-prem arrays compare to first-party cloud services, and a clear checklist to pick the right fit for cost, control, scalability, and resilience.
As part of Flex, JetCool is scaling its microconvective cooling technology to help hyperscalers deploy next-gen systems faster, streamlining cooling deployments from server to rack in the AI era.
Ventiva discusses how hard-won laptop cooling know-how can unlock inside-the-box gains for AI servers and racks—stabilizing hotspots, preserving acoustics, and boosting performance.
At GTC DC, NVIDIA outlined DOE-scale AI systems, debuted NVQLink to couple GPUs with quantum processors, partnered with Nokia on AI-RAN and the path to 6G, mapped Uber robotaxis for 2027, and highlighted Synopsys' GPU gains.
Design shifted to rack-scale. Power and cooling span the full path. Liquid is table stakes. Three takeaways from OCP 2025—and why CelLink’s PowerPlane fits an AI-factory mindset.
TechArena host Allyson Klein chats with Alphawave Semi's Letizia Giuliano about the future of semiconductor innovation across memory, optical, and interoperable chiplet solutions, and how her company is poised to deliver leadership innovation rooted in standards.
TechArena host Allyson Klein chats with MemVerge founder and CEO Charles Fan about his company's disruptive vision for breaking through data center memory limitations and what the CXL standard will bring to infrastructure innovation.
TechArena host Allyson Klein chats with AMD senior fellow and CXL technical task force co-chair Mahesh Wagh about AMD's entry into the CXL platform market with 4th Gen AMD EPYC processors and his organization's strategy to deliver disruptive innovation utilizing CXL capability in the years ahead.
TechArena host Allyson Klein chats with NVIDIA Data Center Product Architect and Universal Chiplet Interconnect Express organization board member Durgesh Srivastava about the new UCIe specification and how it will reshape the foundations of compute architectures.
TechArena host Allyson Klein talks with Bev Crair, Senior Vice President of Oracle Cloud Infrastructure Compute, about her team's delivery of bespoke services and efforts to simplify multi-cloud adoption.
TechArena host Allyson Klein chats with Check Point Software's TJ Gonen about the state of cloud security and how security solutions must start with a developer lens.
From #OCPSummit25, this Data Insights episode unpacks how RackRenew remanufactures OCP-compliant racks, servers, networking, power, and storage—turning hyperscaler discards into ready-to-deploy capacity.
Allyson Klein and co-host Jeniece Wnorowski sit down with Arm’s Eddie Ramirez to unpack Arm Total Design’s growth, the FCSA chiplet spec contribution to OCP, a new board seat, and how storage fits AI’s surge.
Midas Immersion Cooling CEO Scott Sickmiller joins a Data Insights episode at OCP 2025 to demystify single-phase immersion, natural vs. forced convection, and what it takes to do liquid cooling at AI scale.
From hyperscale direct-to-chip to micron-level realities: Darren Burgess (Castrol) explains dielectric fluids, additive packs, particle risks, and how OCP standards keep large deployments on track.
From OCP San Jose, PEAK:AIO’s Roger Cummings explains how workload-aware file systems, richer memory tiers, and capturing intelligence at the edge reduce cost and complexity.
Innovative power delivery unlocks a shift in data-center design. CelLink PowerPlane routes thousands of amps in a flat, flexible circuit—cutting cabling and accelerating AI factory builds.