
From racing oils to data center immersion cooling, Valvoline is reimagining thermal management for AI-scale workloads. Learn how they’re driving density, efficiency, and sustainability forward.

This Data Insights episode unpacks how Xinnor’s software-defined RAID for NVMe and Solidigm’s QLC SSDs tackle AI infrastructure challenges—reducing rebuild times, improving reliability, and maximizing GPU efficiency.

AI Infra Summit is an encapsulation of AI’s advancement in event form. This annual gathering in the Bay Area has grown from a tiny upstart just a few years ago into a must-attend conference for AI infrastructure providers, reflecting AI’s advance into the enterprise. It’s also a must-attend event for TechArena as we seek insight on exactly where AI value is being extracted across industries and how the industry is migrating from brute force to elegant deployments. The 2025 edition did not disappoint for key insights – let’s get started.
We’ve been following the transition from pure LLM activation to the integration of agentic computing into enterprise workflows. I have to admit, I thought that agents would take their own sweet time gaining employee access within large firms as IT departments navigate the data security, trust, and privacy concerns associated with agentic deployment. Talks with IT leaders tell a different story, with adoption taking flight across market sectors.
No conversation brought this more to light than my interview with Walmart’s SVP of Enterprise Business Services, David Glick. Glick shared his organization’s advancement of multiple classes of agents focused on supporting IT engineers, and his group’s plans for broad-scale agentic deployment across the massive retailer’s functions. The specificity of his narration demonstrated that this was not just talk. Walmart is aggressively utilizing agents as an employee aid to accelerate work productivity and drive efficiency into the business. And Glick was not alone. I heard his enthusiasm for agentic computing’s value, well beyond the use case value we’ve seen from LLMs, echoed in my discussions with Arun Nandi, AI lead at Carrier; Nikhil Tyagi, senior manager of Emerging Devices Innovation at Verizon Business; Rohith Vangalla, lead software engineer at Optum Technologies; and Anusha Nerella, a leading AI innovator in the financial services space.
While broad enterprise use cases were the center of the conversation, agentic computing also framed two other topics at the Summit: the advancement of silicon to keep pace with customer demand and broad changes in how data is accessed and remembered within agentic workflows.
To unpack the former, we talked with Anand Thiruvengadam, senior director and head of AI product management at Synopsys, who shared news of the delivery of the company’s LLM Copilot for silicon engineers as the first phase of full agentic tool delivery to this foundational use case. We’ve written about Synopsys a lot on TechArena, and this announcement was an expected advancement from the company. Still, it’s terrific to see that they are progressing with their master plan on schedule and gaining market traction with partner collaborations along the way.
And while Synopsys is the market leader in delivering this technology breakthrough to silicon engineers, they aren’t alone in driving advancement. At the show we met with Shashank Chaurasia, co-founder and chief AI officer at Moores Lab AI, an emerging player led by a team of former Microsoft silicon architects and engineers (yes, those folks who actually make their own silicon). They have delivered full agentic AI to accelerate universal verification methodology (UVM)-based verification flows and are claiming traction for their new capabilities with the who’s who of silicon development. While this addresses a small slice of silicon design, we walked away with two insights. First, agentic integration in this space will be driven quickly by the critical need for silicon design teams to accelerate product design cycles while also widening chip delivery for custom solutions. Second, there’s a unique alignment for developing agentic control among those with experience in the domain, and this should influence how we expect startups to emerge for different functions across a broad swath of use cases.
And as we untangle the Gordian knot of interconnected advancements on display at the Summit, we shift to what agentic computing means for the breadth of infrastructure architecture. One thing that stood out to me was the growing importance of storage architectures in an agentic world. Anyone who has actually used an agent knows that sustained memory is critical for workflow completion, and leading industry voices echoed this sentiment in discussions, including David Kanter, founder of MLCommons and head of MLPerf, and Daniel Wu, AI executive and educator. Key takeaway? Expect storage innovation from media to systems and orchestration to continue advancing at a frantic pace as operators build out agentic computing’s ability to remember.
Workflow advancement also showcased the importance of compute innovation and uncovered some surprising trends on where and what compute is required to fully deliver agentic advancement. We started the show with a chat with Mohamed Awad, SVP and GM of the infrastructure business at Arm, where he shared how his company has gained massive traction in the data center through efficient CPU delivery. Of course, we’ve seen this with hyperscale adoption of Arm as the chosen core for indigenous silicon advancement, but we are also seeing broader market traction as Arm cores become more ubiquitous and things like workload portability become less of a hurdle for IT administrators to manage.
In case you’ve forgotten, Arm is also the Grace in Grace Hopper, and while team green gets most of the credit for the heady performance delivered for AI factories, Grace is delivering important control at multiple points along an AI workflow, especially as we shift our focus to agentic computing.
Arm demonstrated this at the conference, highlighting how Arm Neoverse processors handle many critical elements of the agentic workflow. In a Gmail automation use case, emails were fetched and analyzed for intent. Based on this analysis, specialized agents were then deployed to schedule, summarize, and create replies. It was a terrific reminder that as the “head node” within the system, the CPU acts as the workflow control center: managing system resources, initiating actual execution of workflow steps like retrieving and delivering emails, pre-processing data, and triggering actions based on the analysis delivered by the GPU. If we think about what CPUs and GPUs are each good at, this makes perfect sense, and Arm gave us a good reminder that CPUs remain masters of the domain and could be argued to have a renaissance of relevance coming as agentic workflows become much more advanced.
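To make that division of labor concrete, here is a minimal, hypothetical Python sketch of the orchestration pattern Arm described: a CPU-resident controller fetches and pre-processes mail, calls a (stubbed) intent model, and routes work to specialized agents. All function and agent names here are illustrative assumptions, not code from Arm’s demo.

```python
# Hypothetical sketch of a CPU-orchestrated agentic email workflow.
# The "head node" logic (fetching, pre-processing, routing, triggering actions)
# runs on the CPU; classify_intent() stands in for the model call that would
# typically be served from a GPU or other accelerator.

from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    subject: str
    body: str


def fetch_emails() -> list[Email]:
    # Placeholder for an IMAP/Gmail API call handled by the CPU.
    return [Email("alice@example.com", "Sync next week?", "Can we meet Tuesday at 10?")]


def classify_intent(email: Email) -> str:
    # Stand-in for an LLM inference call (the accelerator-side analysis).
    return "schedule" if "meet" in email.body.lower() else "summarize"


def scheduling_agent(email: Email) -> str:
    return f"Proposed calendar invite for: {email.subject}"


def summarizing_agent(email: Email) -> str:
    return f"Summary of '{email.subject}': {email.body[:40]}..."


def reply_agent(email: Email, context: str) -> str:
    return f"Draft reply to {email.sender}: Thanks! {context}"


AGENTS = {"schedule": scheduling_agent, "summarize": summarizing_agent}


def run_workflow() -> None:
    for email in fetch_emails():          # CPU: retrieve and pre-process
        intent = classify_intent(email)   # accelerator: analyze intent
        result = AGENTS[intent](email)    # CPU: dispatch the specialized agent
        print(reply_agent(email, result)) # CPU: trigger the follow-on action


if __name__ == "__main__":
    run_workflow()
```

Even in toy form, everything except the intent call is classic control-plane work, which is exactly the role Arm argues the CPU will keep playing as agentic workflows mature.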
Arm, of course, was far from the only processor vendor on hand. A slew of accelerator vendors showcased their latest advancements, including the largest processor on the planet from Cerebras and arguably the most advanced power-sipping accelerators around from Axelera AI. We loved seeing both of these extremes on display, as they provided insight into the breadth of deployment scenarios driving AI adoption today, from the Cerebras clouds popping up all across the globe to Axelera AI’s target of edge implementations requiring dialed-in performance for power-sensitive environments. We’re excited to see these companies gain market traction, noting that open innovation will fuel advancement and keep the entire industry working together to the customer’s benefit.
Our final takeaway concerns what is likely the largest challenge in transforming AI data centers from brute-force compute delivery to elegant deployments that balance performance, efficiency, and scale. The truth is that an alarming percentage of GPU capacity sits idle in data centers today…waiting for data to process. When you consider the vast sums being spent on these processors, it’s clear that increasing GPU utilization should be a rallying cry for the entire industry. And while we’ve already discussed storage architecture and the transformation of data pipelines for AI workflows, another area is in need of urgent advancement: AI data center networks. Today, network congestion is the main culprit behind low utilization rates, with a combination of antiquated technology and network architectures poorly matched to current requirements to blame.
Cornelis Networks has emerged as a vendor with a solution for this challenge, with the introduction of their CN5000 network solutions delivering congestion-free networking dialed in for AI workloads. I caught up with Cornelis CEO Lisa Spelman at the Summit, and she confirmed that this challenge is being felt across hyperscale, neocloud, and enterprise. “Right now infrastructure is holding back the next great discovery. It’s holding back the next human achievement. It’s holding back the next business evolution…and we want to provide a path to unlocking that,” she said.
So what’s the TechArena take from all of this advancement? Those expecting to see the chasm of AI adoption emerge may be disappointed as enterprises heat up agentic solutions for deployment across job functions. And while early LLM use cases have been somewhat limited to customer support, marketing, and other read/write-heavy environments, agents will go deeper into every corner of business, fueling broad adoption of AI infra from cloud instances, to enterprise on-prem, to the edge. The diversity of opportunity should rise, even if some players only carve out small segments of success, purely based on the gargantuan scale of deployment. And we’ll inch our way closer to elegance as advances are made across every element of infrastructure, from compute, storage, and networking to power delivery, cooling, and application oversight. Thanks to the AI Infra Summit team for putting on such a valuable conference, and to practitioners and vendors alike for sharing their views on the state of advancement.

In this episode, Allyson Klein, Scott Shadley, and Jeniece Wnorowski (Solidigm) talk with Val Bercovici (WEKA) about aligning hardware and software, scaling AI productivity, and building next-gen data centers.

At Yotta 2025, I had the privilege of judging Innovate Arena sessions—a Shark Tank-style innovation contest in which innovators pitched their latest concepts reimagining how we power, cool, and sustain the digital world.
From the stage, the mix of rigor and audacity was unmistakable. The ideas weren’t incremental; they aimed at what’s next—and they stuck with me.
Innovate Arena spotlighted new approaches to some of the most pressing challenges facing our industry, from infrastructure efficiency to breakthrough applications. My role as judge gave me a front-row seat to entrepreneurs and visionaries who are imagining new tech frontiers.
What struck me most throughout the sessions was the diversity of ideas. Some solutions were deeply technical, drilling into hardware optimization and software integration. Others were transformative in their vision—rethinking the very inputs we use to generate and deliver power. The range itself underscored that innovation isn’t one-size-fits-all. The answers to today’s challenges will come from unexpected directions, and we need to create more venues like this to give them visibility and foster open collaboration.
Judging these sessions was a blast. I got to kick the tires on bold ideas, press on the business logic, and challenge assumptions. The energy from startups and smaller teams was palpable—the spark that keeps this industry moving.
Several presentations stood out for the way they attacked real bottlenecks across the AI landscape—everything from how we move electrons to how we pull permits. We saw farm-level methane captured and upgraded into RNG to decarbonize energy inputs, a tightly integrated power chain to accelerate delivery inside the data center, and aluminum-air systems offering cleaner, on-demand backup power. Add in smarter fiber deployments at giga-scale, a reimagined critical-power supply chain, AI that compresses permitting from months to hours, and modular/immersion builds that deploy fast with ultra-low PUEs—and you get a snapshot of innovation that’s practical, scalable, and ready for the AI era.
Amp Americas – Turning Manure into Megawatts: Capturing methane from dairy operations and upgrading it into pipeline-quality RNG, displacing fossil fuels while slashing emissions. Impact to date includes abatement tied to more than 170,000 cows and over 2.3M metric tons of CO2e avoided—showing how circular-economy thinking can deliver real grid value at scale.
DG Matrix – Integrating the Power Chain: Led by CEO Haroon Inam, DG Matrix is laser-focused on accelerating power delivery inside the data center. Their “power router” approach emphasizes tight integration of power electronics and controls to move more electrons, more efficiently, from the grid edge to the rack—exactly the kind of systems thinking AI facilities need.
Phinergy – Aluminum-Air Backup Without the Smoke: CEO Emmanuel Levy and team are advancing aluminum-air technology as on-demand, zero-emission backup power. For data centers and telecom, it offers a compelling alternative to diesel gensets—high-energy density, rapid availability, and a cleaner path to meeting uptime and sustainability targets.
Via Photon – Smarter Fiber for AI Factories: Via Photon tackled one of the most overlooked challenges in AI infrastructure: fiber optic cabling. With gigawatt-scale facilities requiring up to 10 million strands of fiber, their pre-terminated, factory-tested modules cut installation time, reduce rework, and protect against damage, helping data centers go live faster and at lower risk.
Hyper – Reinventing Supply Chains: Hyper is rethinking how we build critical power infrastructure. By tapping latent capacity in adjacent industries like aerospace and automotive, they can rapidly scale manufacturing of switchboards, PDUs, and RPPs. Paired with their Hyperspace portal, they offer end-to-end transparency and QA so customers get certainty in an uncertain supply chain.
Blumen – AI for Permitting: Blumen addressed a pain point every developer knows too well: permitting delays. Their platform digitizes zoning codes and merges them with thousands of geographic datasets, using AI to analyze requirements in hours instead of months. For data centers facing new local rules, that speed can be the difference between breaking ground or walking away.
DUG – Immersion Cooling & Modularity: DUG has proven immersion cooling at scale with near-optimal efficiency, and is now packaging that capability into modular “Nomad” units—10-foot and 40-foot containers that can be shipped, deployed, and plugged in quickly. The result: sustainable, mobile compute capacity with PUEs as low as ~1.03.
Mod42 – Modular Data Centers: Mod42 takes a ground-up modular approach. Their factory-built data centers can deploy up to 60% faster and at lower cost than traditional builds, while reducing land disturbance and improving site density, exactly the combination the AI era needs to scale responsibly.
Across the sessions, a few themes consistently emerged.
First, efficiency is non-negotiable. Every presentation, whether explicitly or not, touched on how solutions must reduce environmental impact. Our industry is under increasing pressure to deliver on efficiency goals, and innovators are stepping up.
Second, many of the best concepts drew on adjacent disciplines—agriculture, materials science, industrial engineering—bringing fresh tools to familiar problems in compute, power, and cooling.
And third, the ecosystem matters. Even the most brilliant innovation needs a supportive ecosystem of investors, policymakers, and infrastructure providers to move from concept to scale. Many presenters acknowledged this, speaking to how they plan to bridge the gap from prototype to production.
Serving as a judge was both an honor and a learning experience. The energy of the presenters was infectious, and their passion reminded me why I fell in love with this industry in the first place. Too often, we get caught up in the incremental—the next quarter, the next benchmark, the next feature release. Innovate Arena reminded me of the importance of stepping back to look at the big picture: where is the world headed, and how can technology be harnessed to make it better?
Props to Yotta for creating this platform. Giving innovators the stage, and giving industry leaders the chance to engage with them directly, is how we accelerate progress. The format worked beautifully—part competition, part collaboration, and all inspiration.
As I left the Innovate Arena sessions, I felt a renewed sense of optimism. The challenges we face, from energy efficiency in AI infrastructure to sustainable growth in data centers, are real and daunting. But they are not insurmountable. Events like this prove that the ingenuity, creativity, and drive to solve them are alive and well.
My biggest takeaway: innovation lives in barns, in universities, in startups, and in the imaginations of people bold enough to ask, “What if?”
I am grateful for the chance to play a part in this process, and I look forward to seeing how these ideas evolve in the months and years to come.

From AI Infra Summit, Celestica’s Matt Roman unpacks the shift to hybrid and on-prem AI, why sovereignty/security matter, and how silicon, power, cooling, and racks come together to deliver scalable AI infrastructure.

Discover how JetCool’s proprietary liquid cooling is solving AI’s toughest heat challenges—keeping data centers efficient as workloads and power densities skyrocket.

Solidigm’s Ace Stryker joins Allyson Klein and Jeniece Wnorowski on Data Insights to explore how partnerships and innovation are reshaping storage for the AI era.

From storage to automotive, MLPerf is evolving with industry needs. Hear David Kanter explain how community-driven benchmarking is enabling reliable and scalable AI deployment.

As the demand for AI scales and the energy footprint of data centers comes under sharper scrutiny, AMD is pushing the boundaries of what’s possible in efficiency. The company surpassed its ambitious energy-efficiency goal ahead of schedule, with a 38x improvement, and is now setting its sights even higher: a 20x rack-scale efficiency target by 2030.
In this Five Fast Facts Q&A, I sat down with Justin Murrill, senior director of corporate responsibility at AMD, to explore what these milestones mean in practice, from slashing carbon emissions for AI training to reimagining rack-level design—and how innovation in hardware, software, and ecosystem collaboration will be key to building a more sustainable future for compute.
When we set our 30x25 goal, we wanted to ensure it was rooted in a clear benchmark and represented real-world energy use.[i] We worked closely with renowned compute energy efficiency researcher and author, Dr. Jonathan Koomey, to develop a goal methodology that includes segment-specific data center power utilization effectiveness (PUE) and typical energy consumption for accelerated computing used in HPC and AI-training workloads.
The practical implication is that data centers utilizing AMD CPUs and GPUs can achieve the same computing performance with 97% less energy when compared to systems from just five years ago. This represents more than a 2.5x acceleration over industry trends from the previous five years (2015-2020). We achieved this through deep architectural innovations, aggressive performance-per-watt gains across our data center GPU and CPU products, and software optimizations.
Our teams are accelerating innovation to improve energy efficiency, which will continue to have ripple effects. On the software side, we can continue to drive enhancements well after products ship. As exciting as it was to beat our goal, we are looking forward to the advances we will continue to make.
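For readers connecting the two figures above, the arithmetic is simple: a 38x gain in performance per unit of energy means the same work now takes roughly 1/38 of the energy, which is where the 97% reduction comes from. A quick sketch of that calculation (mine, not AMD’s):

```python
# Relating the 38x efficiency gain to the ~97% energy reduction quoted above.
improvement = 38                 # measured gain vs. the 2020 baseline
energy_ratio = 1 / improvement   # fraction of energy needed today for the same work
print(f"Energy reduction: {1 - energy_ratio:.1%}")   # -> ~97.4%
```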
As workloads scale and demand continues to rise, node-level efficiency gains won't keep pace. The progression of our product goals to rack-level reflects an expanding ambition and business strategy to optimize a broader portion of the ecosystem. This is reflected in our journey to building a best-in-class portfolio to address the rapidly evolving AI market. Over the last few years, we have made several strategic acquisitions to expand our AI software, hardware and systems capabilities, including scaling to full rack-level system design with the acquisition of ZT Systems.
Our new rack-scale goal outpaces the historical industry improvement trend (2018 to 2025) by nearly 3x. To demonstrate the real-world implications of our goal, we used a typical AI model in 2025 as a benchmark, which today requires more than 275 racks for training. With the energy efficiency gains we plan to make, we believe we could accomplish this training with less than one fully utilized rack in 2030.[ii] This rack consolidation could enable more than a 95% reduction in operational electricity use and a 97% reduction in carbon emissions.
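Footnote [ii] below lays out the assumptions behind the 275-rack figure. As a rough back-of-the-envelope check (my own reproduction of the published inputs, so expect small rounding differences), the arithmetic looks like this:

```python
# Reproducing the rack estimate described in footnote [ii]; inputs come from that footnote.
training_flop = 1e25           # total FLOPs to train a typical notable 2025 model
seconds_per_month = 2.6298e6   # assumed one-month training window
mfu = 0.6                      # model FLOPs utilization

required_flops = training_flop / seconds_per_month / mfu   # sustained FLOP/s needed
rack_flops_2025 = 22.656e15                                # 22.656 PFLOPS per MI300X rack

racks_2025 = required_flops / rack_flops_2025
print(f"~{racks_2025:.0f} racks in 2025")   # on the order of the "more than 275 racks" cited
```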
Our environmental sustainability goals span our operations, supply chain and products, and are integrated into how we conduct business responsibly. Increasing the computing performance delivered per watt of energy consumed is a vital aspect of our corporate strategy, our climate strategy, and our ethos to tackle some of the world’s most important challenges. Global electricity consumption trends show a collective trajectory to consume more energy than the market can support within the next two decades.[iii] Further, many of our customers have energy efficiency and GHG emissions goals of their own. Therefore, the need for innovative energy and computing solutions is becoming increasingly important – perhaps nowhere more so than in the data center.
We also see opportunities for AI to advance overall data center sustainability. For example, AI-driven power management can identify inefficiencies, like underused or overtaxed equipment, and automatically adjust the allocation of resources for optimal power consumption. AMD GPUs, CPUs, adaptive computing, networking and software are designed to all work together seamlessly to help optimize data center energy management systems by adjusting workloads and system configurations.
Beyond the data center, AMD today is the only provider delivering end-to-end AI solutions. Being able to deliver AI compute locally on a device – whether a PC or a processor embedded at the edge – can help reduce the power burden on data centers.
We know that on top of the hardware and system-level improvements, even greater AI model efficiency gains will be possible through software optimizations. We estimate these additional gains could be up to 5x over the goal period as software developers discover smarter algorithms and continue innovating with lower-precision approaches at current rates.[iv]
This is why we believe the open ecosystem is so important for AI innovation. By harnessing the intelligence of the broader developer community, we can accelerate energy efficiency improvements. While AMD is not claiming that full multiplier in our own goal, we are proud to provide the hardware foundation that enables it and to support the open ecosystem and developer community working to unlock those gains.
Whether through open standards, our open software approach with AMD ROCm™, or our close collaboration with our partners, AMD remains committed to helping innovators everywhere scale AI more efficiently.
At AMD, we have repeatedly demonstrated the ability to lay out a vision for computing energy efficiency, project the pathway of innovations, and execute on our roadmap. We have significantly expanded our engineering talent pool with the best and brightest minds to deliver some of the world’s most advanced chips, software, and enterprise AI solutions. We’ve also steadily increased our investment in research and development to drive ongoing innovation in compute performance and efficiency. These strategies, along with comprehensive design solutions, will support exponential growth of both improved performance and increased energy efficiency.
Fundamentally, our culture at AMD thrives on setting big goals that address important challenges and require new ways of thinking. We will continue to transparently report annually on our progress toward our goals and work with third-parties on measurement and verification. You can read more about our recent progress in our 30th annual Corporate Responsibility Report.
[i] Includes high-performance CPU and GPU accelerators used for AI training and High-Performance Computing in a 4-Accelerator, CPU hosted configuration. Goal calculations are based on performance scores as measured by standard performance metrics (HPC: Linpack DGEMM kernel FLOPS with 4k matrix size; AI training: lower precision training-focused floating-point math GEMM kernels operating on 4k matrices) divided by the rated power consumption of a representative accelerated compute node including the CPU host + memory, and 4 GPU accelerators.
[ii] AMD estimated the number of racks to train a typical notable AI model based on EPOCH AI data (https://epoch.ai). For this calculation we assume, based on these data, that a typical model takes 10^25 floating point operations to train (based on the median of 2025 data), and that this training takes place over 1 month. FLOPs needed = 10^25 FLOPs/(seconds/month)/Model FLOPs utilization (MFU) = 10^25/(2.6298*10^6)/0.6. Racks = FLOPs needed/(FLOPS/rack in 2024 and 2030). The compute performance estimates from the AMD roadmap suggest that approximately 276 racks would be needed in 2025 to train a typical model over one month using the MI300X product (assuming 22.656 PFLOPS/rack with 60% MFU) and <1 fully utilized rack would be needed to train the same model in 2030 using a rack configuration based on an AMD roadmap projection. These calculations imply a >276-fold reduction in the number of racks to train the same model over this six-year period. Electricity use for a MI300X system to completely train a def
[iii] “The Decadal Plan for Semiconductors,” Semiconductor Research Corporation, https://www.src.org/about/decadal-plan/ (accessed May 23, 2024).
[iv] Regression analysis of achieved accuracy/parameter across a selection of model benchmarks, such as MMLU, HellaSwag, and ARC Challenge, shows that improving the efficiency of ML model architectures through novel algorithmic techniques, such as Mixture of Experts and State Space Models, can improve their efficiency by roughly 5x during the goal period. Similar numbers are quoted in Patterson, D., J. Gonzalez, U. Hölzle, Q. Le, C. Liang, L. M. Munguia, D. Rothchild, D. R. So, M. Texier, and J. Dean. 2022. "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink." Computer, vol. 55, no. 7, pp. 18-28.
Therefore, assuming innovation continues at the current pace, a 20x hardware and system design goal amplified by 5x software and algorithm advancements can lead to a 100x total gain by 2030.

Allyson Klein talks with Synopsys’ Anand Thiruvengadam on how agentic AI is reshaping chip design to meet extreme performance, time-to-market, and workforce challenges.

With sustainability at the core, Iceotope is pioneering liquid cooling solutions that reduce environmental impact while meeting the demands of AI workloads at scale.

Dell outlines how flash-first design, unified namespaces, and validated architectures are reshaping storage into a strategic enabler of enterprise AI success.
The rise of AI has brought unprecedented pressure on the power and cooling systems that sustain today’s data centers. Racks that once drew 30 kW are now pushing 100 kW or more, with 1 MW configurations on the horizon. Meeting these demands isn’t just about scaling capacity — it’s about rethinking the entire power delivery chain for maximum efficiency and sustainability.
Flex, a global leader in manufacturing and critical power solutions, is tackling this challenge head-on. From high-voltage DC architectures and 97.5% efficient power shelves to integrated liquid cooling and vertically integrated “grid to chip” solutions, the company is reshaping how data centers operate in the AI era.
In this “5 Fast Facts on Compute Efficiency” conversation, I sat down with Chris Butler, President of Embedded and Critical Power at Flex, to explore how innovations in power, cooling, and manufacturing scale are unlocking new levels of efficiency, and how these breakthroughs could redefine sustainable AI infrastructure in the years ahead.
A1: As AI workloads push data center rack densities higher, data center operators are fundamentally rethinking power architectures to meet energy consumption demands with maximum efficiency, scalability, and sustainability. A broader industry shift toward high-voltage DC architectures, particularly +/- 400 V DC and 800 V DC, has the potential to reduce conduction losses, enable longer cable runs, and minimize the conversion stages required to step power down from the grid, improving system efficiency and reducing thermal management overhead.
Flex collaborates with hyperscalers well in advance of new standards and product introductions to ensure their power architectures are innovation-ready — an example being our recently announced power shelf system that is optimized for NVIDIA GB300 NVL72 platforms. Achieving 97.5% efficiency at half-load, it leverages native 800 V DC input to streamline power conversion and reduce the need for intermediate AC stages. That improves energy efficiency while simplifying infrastructure design, allowing for denser deployments and faster scalability within the same data center footprint.
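To ground the high-voltage argument in numbers: delivering the same power at 800 V instead of a lower distribution voltage cuts current proportionally, and I²R conduction losses fall with the square of that current. Here is a small illustrative calculation; the cable resistance and load are my own example values, not Flex specifications:

```python
# Illustrative conduction-loss comparison across distribution voltages (example values only).
power_w = 100_000          # 100 kW delivered to a rack
cable_resistance = 0.001   # ohms of round-trip cable resistance (assumed example)

for voltage in (48, 400, 800):
    current = power_w / voltage               # I = P / V
    loss_w = current ** 2 * cable_resistance  # P_loss = I^2 * R
    print(f"{voltage:>4} V DC: {current:7.1f} A, {loss_w:8.1f} W lost in cabling")
```

Doubling the voltage quarters the resistive loss in the same cable, which is the core physics behind the industry shift toward +/- 400 V and 800 V DC distribution.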
A2: Proximity matters. With advanced manufacturing facilities in 30 countries, we enable customer regionalization strategies while providing the local expertise and global scale needed to drive competitive advantage. Customers benefit from capabilities and expertise that allow them to shorten the distance between manufacturing and deployment, speeding time to compute — a critical ROI metric for data center operators and investors — reducing their carbon footprint, and enhancing scalability within and between facilities. It also accelerates the delivery of services, from design and engineering support through deployment, installation, refurbishment, and recycling.
Depending on the engagement, operational efficiency may take the form of faster prototyping, reduced downtime, agile deployment, or post-sale value capture, among myriad other benefits. A mosaic of manufacturing facilities across geographies also enhances supply chain resilience, enabling customers to better navigate geopolitical uncertainties, shifting demand, labor shortages, and unforeseeable disruptions. Flex’s manufacturing capacity enables us to meet the insatiable demand for embedded and critical power solutions — not to mention cooling solutions and essential infrastructure such as racks and enclosures — while delivering tangible operational efficiency gains for data center customers worldwide.
A3: With data center cooling needs surpassing what can be accomplished with traditional air-cooling systems, liquid cooling has become the go-to choice for dealing with the excessive heat produced by power-hungry AI and HPC workloads in high-density compute environments. Leveraging our direct-to-chip cooling technology, data center customers can achieve zero water consumption, an over 50 percent decrease in cooling power usage, and an 18 percent decrease in total power consumption, preventing an annual emission of 35 million metric tons of CO2 with widespread adoption.
While AI receives the lion’s share of attention, it’s important to remember that it still accounts for just 14 percent of global data center power usage — which means that the vast majority of data center space is dedicated to CPU-based workloads. To that end, we’re also at the forefront of developing innovative cooling solutions that deliver immediate performance and efficiency improvements without requiring any changes to data center infrastructure. For instance, the standalone JetCool SmartPlate™ System, designed to simplify the adoption of liquid cooling, eliminates the need for facility water — not a priority in air-cooled environments — while delivering an average total IT power savings of 15 percent, enabling customers to maximize compute in power-constrained environments.
A4: With 1+ MW racks on the horizon, data center operators are rethinking their architectures as power and thermal management requirements escalate. Today, power, cooling, and servers are often fully integrated within the same rack, an innovation that has served the industry well. However, separating them can (perhaps paradoxically) ease space constraints while imparting a host of other benefits. While power and cooling may be disaggregated into separate “sidecar” racks to increase compute capacity in the IT rack, this still requires an integrated, seamless interplay of systems from grid to chip to extract maximum value.
For instance, a beefier 4,000-amp busbar feeding power into a reconfigured data hall with an end-of-row cooling distribution unit (CDU) can accommodate high-density IT racks and elevate the power architecture to 400V. Flanking the IT racks with standalone power cabinets and CDUs not only increases the amount of space in the IT rack dedicated to compute, it opens up the data hall floorspace considerably. Furthermore, the new configuration can improve system efficiency by about 20 percent — which translates into significant energy savings annually per rack. In large data centers with thousands of racks, the potential savings are clear. Data center operators are looking for partners that have the ability to design and manufacture complete solutions and deploy them at scale worldwide.
A5: Traditionally, converting incoming AC power to a DC voltage usable at the chip level requires several conversion steps, each of which impacts energy efficiency. But we’re seeing higher DC voltages emerge in the data center, including the 800 V DC that allows direct connection to renewable energy systems and +/- 400 V DC required for the integration of battery energy storage systems (BESS) and microgrid applications.
Condensing power conversion into a single solid-state transformer not only produces efficiency gains, it significantly reduces the square footage required for electrical rooms — by some estimates, up to 90 percent by 2030. This opens up new paths to profitability: saving on construction costs when capacity can be met with less space, or increasing compute capacity in the existing envelope by adding more racks. We call this the convergence of power and IT, and it is a welcome step forward.

Meet Vivek Venkatesan, one of TechArena’s newest Voices of Innovation. Vivek is a lead data engineer at Vanguard and a senior member of IEEE with 15+ years of experience in data engineering, cloud architecture and applied AI.
I sat down with Vivek to better understand his journey in tech and his unique contribution to the global data and AI community.
A1: I started as a front-end developer on a banking product and was sent to Botswana for an implementation. A critical issue with incorrect ATM transaction data sparked my lifelong interest in data. Since then, I’ve worked across banking, health insurance, healthcare, and financial services. From leading a COVID-19 contact tracing system that protected healthcare workers to cutting wasted cloud costs while enabling AI pipelines, my journey has been about making data both impactful and human-centered. During the pandemic, it was never just numbers on a dashboard, it was lives and livelihoods.
That perspective of empathy continues to shape how I build data systems today.
A2: The Botswana assignment. What seemed like a small data issue, a misreported ATM transaction, showed me how profoundly even tiny errors can affect human lives. That moment pushed me to commit to building systems with integrity, resilience, and accountability. It taught me that data is never just technical; it is deeply human.
A3: In the beginning, I thought innovation meant adopting the newest tool or framework. Over time, I have come to see it as the art of solving real-world problems responsibly and at scale. Sometimes innovation is a bold architectural shift. Other times, it is a simple-but-overlooked fix that unlocks trust and adoption.
To me, true innovation blends novelty with empathy, sustainability, and measurable impact.
A4: Federated and privacy-preserving AI. Organizations need to learn collectively while still protecting sensitive data. This technology allows collaboration across boundaries without compromising privacy. I believe it will be a foundation for scaling AI responsibly in industries where trust and compliance matter most.
A5: I use three filters:
Does it solve a real-world problem?
Can it scale sustainably, financially, technically, and ethically?
Does it lay a foundation for future growth?
If an idea does not meet these, it is usually hype.
A6: That faster automatically means better. In my experience, the innovations that last are built on credibility and trust. A system that people rely on day after day, even quietly, is often more innovative than the flashy tool that grabs headlines and disappears.
A7: They are collaborators. AI can handle repetitive tasks and surface insights quickly, but it is human creativity that frames the right questions and applies judgment. AI amplifies human ingenuity; it does not replace it.
A8: Bridging the trust gap. We already have incredible technology, but adoption often falters when people do not trust it. Building systems that are transparent, reliable, and empathetic to end users will determine whether innovation succeeds.
A9: Photography. Just as I frame a cityscape or a moonrise, in data I try to frame problems from the right perspective. Photography teaches patience, pattern recognition, and the ability to see both details and the bigger picture. These are skills I rely on when solving complex data challenges.
A10: I am excited to share lessons from real-world challenges and to learn from peers who are pushing boundaries in different domains. I hope the audience takes away that innovation is not just about tools; it is about solving problems with empathy, trust, and scale in mind.
A11: Nikola Tesla. I would ask how he balanced bold imagination with the realities of adoption. That tension between radical ideas and practical acceptance is still the defining challenge of innovation today.

In this episode of In the Arena, David Glick, SVP at Walmart, shares how one of the world’s largest enterprises is fostering rapid AI innovation and empowering engineers to transform retail.

Haseeb Budhani, Co-Founder of Rafay, shares how his team is helping enterprises scale AI infrastructure across the globe, and why he believes we’re still in the early innings of adoption.

Leading up to the 2025 OCP Global Summit, I sat down with Ben Sutton, product marketing manager at CoolIT Systems, to discuss how direct liquid cooling (DLC) is changing the way companies scale AI and HPC.
We covered where adoption stands across hyperscale, colo, and enterprise, the practical tradeoffs with immersion, and the design choices that drive performance and reliability.
For readers new to CoolIT, the company is a 24-year pioneer in scalable DLC for high-density compute, with technology cooling 5M+ GPUs and CPUs globally. Partnering with leading processor and server makers, CoolIT’s modular DLC solutions boost rack density, performance, and power efficiency for AI, HPC, and enterprise data centers.
Check out our conversation:
A: Cooling the latest processors has become a serious challenge. With air cooling, that means bigger heat sinks, faster airflow, and colder air temperatures. That leads to three problems. First, density decreases because each server takes up more rack space. Second, fans draw more and more power to move the required air. Third, HVAC systems must run harder and longer to reduce the data center’s ambient air temperature.
DLC takes a different approach. It targets only the components that dissipate the most heat, mainly the processors, and leaves peripheral components to the ambient air. By focusing on the source of the heat, liquid cooling cuts energy use in both fans and facility cooling.
The reason it works so effectively is simple physics. Liquids absorb heat far more efficiently than air. Air is an insulator. Water, for example, can store about 4,000 times more heat in a given volume. This is what drives the dramatic improvement in energy efficiency and PUE when liquid cooling is deployed.
When we scale this up to a system level, the benefits compound. A single coolant distribution unit (CDU) with two pumps can eliminate the need for a massive volume of airflow, drastically reducing the power consumption required for cooling. These benefits increase further when liquid cooling is extended to peripheral components such as DIMMs (memory) and OSFP (Octal Small Form Factor Pluggable) modules. Together, these can deliver a PUE as low as 1.02, which is an ideal outcome for modern data centers.
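The heat-capacity comparison behind the “4,000 times” figure can be checked with textbook values: by volumetric heat capacity, water holds on the order of 3,500 times more heat per degree than the same volume of air, which the industry commonly rounds to roughly 4,000x. A quick sketch (my own numbers, not CoolIT’s):

```python
# Volumetric heat capacity of water vs. air, using approximate textbook values.
water_cp = 4186      # J/(kg*K), specific heat of water
water_density = 997  # kg/m^3

air_cp = 1005        # J/(kg*K), specific heat of air at room temperature
air_density = 1.2    # kg/m^3 at ~20 C and sea-level pressure

water_volumetric = water_cp * water_density   # ~4.17 MJ/(m^3*K)
air_volumetric = air_cp * air_density         # ~1.21 kJ/(m^3*K)

print(f"Water stores ~{water_volumetric / air_volumetric:,.0f}x more heat per unit volume")
```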
A: AI has brought liquid cooling into the mainstream. HPC was an early adopter, but now AI workloads are pushing densities and thermal loads beyond the limits of air cooling. NVIDIA’s latest platforms, like Blackwell and Blackwell Ultra, require liquid cooling to handle their high power draw and heat dissipation. That has made liquid cooling a necessity for cutting-edge compute environments globally.
End customers are approaching this in different ways. Hyperscaler cloud service providers have the knowledge, engineering experience and ownership stake to move fast. Due to their scale, hyperscalers’ energy efficiency gains will have the greatest effect. Colocation providers are following closely—many lease space to hyperscalers and need to match their cooling capabilities.
Enterprise data centers are lagging in the adoption of liquid cooling. This is because of the lower-power nature of their applications and also because they can often tap into cloud-based AI rather than build in-house infrastructure. However, looking ahead, we see that, by 2028, general-purpose data center CPUs are expected to reach 500-600 W TDP. That level of heat will demand high-performance liquid cooling just as AI GPUs do today. This parallel trend means DLC will no longer be limited to AI accelerators. General compute will require it, too, making thermal strategy a central design factor for the next generation of data center infrastructure.
A: On paper, the physics behind immersion looks very promising, but in practice, there are hurdles.
Data centers today are set up for air cooling. DLC provides the opportunity for a hybrid approach, targeting the highest-TDP components in each server. You can keep using standard rack-based infrastructure, so both new builds and retrofits are straightforward. Because the liquid is fully contained inside the DLC system, server maintenance is simple.
Immersion, on the other hand, can cool all components, but it demands specialized tanks and infrastructure. Maintenance becomes more complex because a server must be lifted out of the fluid and drained before work can begin. Servers also need to be certified for immersion, and manufacturers often need to adjust designs to avoid chemical reactions or degradation when components contact the fluid.
At the system level, cost and ease of installation are the deciding factors. Immersion requires large volumes of expensive dielectric fluids, custom-certified servers, and new tank infrastructure, often in horizontal configurations. That adds significant cost for the owner. DLC is simpler to design and install, is more cost-effective, and does not require redesigned data center infrastructure.
Single-phase DLC is already mature and is being adopted at scale. Immersion cooling is earlier in its adoption cycle and still faces technical and operational challenges before it can be broadly deployed.
A: Cooling can account for 30% to 40% of total energy use in a conventionally air-cooled data center. By moving to liquid cooling, operators see immediate reductions in operational costs because they can cut back on mechanical cooling. In practice, this translates to at least a 10% drop in energy bills, and often more, depending on workload intensity and local climate. Those savings compound over time and support operators in meeting energy efficiency and sustainability targets while increasing compute.
Capital expenditure is also reduced, and not just on cooling hardware. Air cooling consumes space, both within the rack and across the facility. As heat loads rise, air-cooled servers require larger heat sinks, more airflow and lower inlet temperatures. This drives up the size and cost of the entire data center footprint. Liquid cooling removes this constraint by supporting much higher rack density.
With single-phase DLC, operators can run up to five times more power per rack—50 to 100 kW compared to 15 to 30 kW with air cooling. That allows them to scale compute capacity dramatically within the same physical footprint. For new builds, it reduces land and construction costs. For existing sites, it lets operators expand capacity without expanding the facility.
When you combine energy efficiency with space efficiency, the benefits multiply. Lower power bills reduce OPEX, while higher density reduces CAPEX tied to construction, land and power distribution infrastructure. Together, these effects make liquid cooling one of the most effective levers available to control both operating and capital costs in the era of AI-scale compute.
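To see how the density and efficiency effects combine, here is a rough, hypothetical sizing exercise; the 10 MW IT load, per-rack densities, and PUE values are my own illustrative assumptions drawn from the ranges discussed above, not CoolIT claims:

```python
# Rough sizing sketch combining the density and efficiency ranges discussed above.
# All inputs are illustrative examples, not CoolIT performance claims.
it_load_kw = 10_000                    # 10 MW of IT load to deploy

air_rack_kw, dlc_rack_kw = 25, 100     # representative per-rack densities
air_pue, dlc_pue = 1.5, 1.1            # example facility efficiencies

for name, rack_kw, pue in (("air", air_rack_kw, air_pue), ("DLC", dlc_rack_kw, dlc_pue)):
    racks = it_load_kw / rack_kw
    facility_kw = it_load_kw * pue
    print(f"{name}: {racks:4.0f} racks, {facility_kw:,.0f} kW total facility draw")
```

Under these assumptions, the same IT load fits in a quarter of the racks and draws meaningfully less total facility power, which is the OPEX-plus-CAPEX argument made above.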
A: CoolIT has long been known for our strong product offering, particularly around coldplates that feature our IP. With the ever-growing demands for TDP, heat flux and also lower maximum junction temperatures, coldplate performance is critical to the performance of the whole system. Innovative designs, such as ours, offer lower thermal resistance and lower pressure drop. This enables the adoption of not only today’s processors but also tomorrow’s. They also provide reliable temperature uniformity across the processor, allowing it to operate more efficiently under higher loads. Plus, the lifetime of the processor is increased when the silicon is operating within ideal temperature ranges.
We continue to collaborate with our customers on production programs, but also on R&D, which we see as a critical function of what we do and therefore one that garners continued investment.

Direct from AI Infra 2025, AI Expert & Author Daniel Wu shares how organizations build trustworthy systems—bridging academia and industry with governance and security for lasting impact.

We sat down with TechArena Voice of Innovation Robert Bielby to better understand his journey in tech and in life. With nearly four decades of experience spanning hardware engineering, product strategy, and corporate leadership, Robert has witnessed firsthand the cycles of hype, innovation, and reinvention that define our industry.
In this conversation, he shares not only the lessons of a long career, but also candid perspectives on AI, automotive technology, and what it takes to stay relevant in a fast-moving world.
A1: I’ve held a wide range of roles during my nearly 40-year career in tech. From a young age, I was fascinated with electronics—always taking things apart to see how they worked (though not always able to put them back together). By the time I was a teenager, I’d become the neighborhood TV repairperson, making house calls for demanding neighbors who rarely paid but never hesitated to critique the “too green” or “too red” colors on their screens.
Before my formal career as a hardware engineer and system architect for digital magnetic instrumentation recorders, I worked as an electronic repairperson and technician. After eight years as a hardware designer and architect, my trajectory shifted. Over time, I took on roles in product definition, system architecture, product and corporate marketing, strategy, and P&L leadership for semiconductor companies across memory, ASICs, programmable logic, and AI. My early grounding in hands-on repair and design has served me throughout my career.
A2: The turning point was moving from hardware design to product definition—a role that reported to marketing. For a long time, I resented what felt like going over to the “dark side” compared to designing hardware that made motors spin and lights flash.
Eventually, I came to appreciate that marketing and product definition were just as critical as hardware. Building a sustainable business requires not only great products but also the ability to define, position, and sell them. I realized it was important to reinvent myself continually to stay relevant in tech’s fast-moving landscape. Andy Grove’s book Only the Paranoid Survive became, and remains, a mantra for me.
A3: Innovation often looks like an alternative solution that wasn’t previously viable. Advances in technology and investment change what’s possible. A good example is FPGAs. Academic papers outlined their potential long before the first devices existed, but they only became viable when Moore’s Law drove down transistor costs. Suddenly, FPGAs became a real alternative to ASICs, and their applications grew dramatically.
AI has followed a similar path. Once considered an academic curiosity because of extreme compute and transistor requirements, AI is now mainstream because advances in performance, integration, and cost reductions made it practical.
A4: Quantum computing doesn’t have the same broad public awareness as AI, but its impact will be profound. The greatest concern is security. Experts warn of “harvest now, decrypt later” attacks—where today’s encrypted data is stored until quantum computers are powerful enough to break it.
Addressing post-quantum cryptography quickly, with an emphasis on crypto agility, is essential. The implications for global infrastructure and transactions can’t be overstated.
A5: I look at innovations through a financial and risk lens. Key questions include:
1. What problem is this solving?
2. How pervasive is the problem?
3. What applications will it impact?
4. What current solution will it displace?
5. Are the benefits significant enough to drive adoption?
6. What are the risks, and is there a credible plan to mitigate them?
Anyone seeking funding should be ready to answer those questions along with financial projections.
A6: Cool innovations that have lots of hype rarely translate into the level of success that was originally projected. Both technical and market viability are essential components of success.
During a technology bubble, a lot of “funny money” is invested in companies focused on the hotly hyped innovations because of FOMO (Fear of Missing Out). The bigger the hype, the greater the amount invested. The Gartner Technology Hype Cycle does a great job of tracking the lifecycle of innovations over time. Many innovations once projected to be game-changing, with billions invested, regularly fall completely off the chart due to a plethora of unforeseen reasons. In short, there’s rarely something that proves to be a sure bet. More innovations fail than succeed. Those that do succeed typically underperform compared to the original expectations.
A7: I view AI as a tool, much like Excel. It can dramatically accelerate creativity, but it doesn’t replace it. Because AI is trained on existing datasets, it isn’t inherently creative—it recombines what already exists.
There’s a thin line, as seen in the music industry. Vanilla Ice’s “Ice Ice Baby” cost $4 million in a settlement because of its similarity to “Under Pressure.” That wasn’t creativity, it was copying. AI sits in that same gray area. Ultimately, though, AI will be a collaborator, not a competitor.
A8: The runaway demand for power and cooling driven by AI computation. We’re at the point where utilities consider bringing decommissioned nuclear plants back online just to support data center growth. That should be a wake-up call.
The ecological consequences of unchecked energy demand could be irreversible. The industry must prioritize architectural changes in data centers to reduce power consumption and cooling needs.
A9: Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World by Jill Jonnes.
The book chronicles the electrification of America from public fear to technical and ethical debates. What struck me most was Edison’s shift in morals. For years, he opposed using electricity for capital punishment, calling it a misuse. But after losing market share to Westinghouse, Edison reversed course, campaigning aggressively to associate AC power with death—even staging animal electrocutions to sway public opinion.
It’s a fascinating reminder of how business pressures can override ethics, and how technology, business, and morality are intertwined. Today, we take electricity for granted, but it transformed the world in just a few decades.
A10: I walk away. Letting a problem marinate almost always leads to the solution. Sometimes the answer comes quickly; once it took two months and arrived out of nowhere while I was riding my bike. Stepping back works better than grinding endlessly.
A11: In my case, technology is my hobby. I maintain a well-outfitted lab where I design and test high-end audio equipment, particularly analog systems. I enjoy exploring different topologies because simple designs often prove more difficult—yet yield the best sound.
The internet has made schematics and design discussions widely accessible, which keeps me constantly learning and reverse-engineering ideas.
Staying focused on the art of design continues to provide me with inspiration for my professional work, which requires that I stay on top of technology and trends and do so at a very deep and detailed level. To a large extent, the beauty of hardware design is that it keeps you honest: it either works or it doesn’t, it either sounds good or it doesn’t. When it’s done right, music through high-end gear gives you goosebumps—the closest thing I know to magic. That pursuit of clarity and elegance fuels my professional work as much as my personal passion.
A12: Working with the TechArena staff has been nothing short of an absolute pleasure. Allyson has done a phenomenal job of building a platform from the ground up with a great staff that has achieved incredibly wide market reach and awareness. The opportunity to be associated with such a platform and recognized as an industry voice on automotive is both an honor and a privilege.
What I am looking to achieve in every blog post is to impart an understanding of the technology directions within the automotive industry in a way that is understandable and accessible. I enjoy teaching. The level of innovation currently underway in the automotive world is mind-blowing, and the average person on the street probably has little to no idea of the level of technology that exists in their car and where it is headed.
Tracking this space and helping to educate on this topic is both a privilege and highly rewarding. In truth, it’s my perspective that most of the auto industry doesn’t know where it’s all headed, but what is clear is that standing still or continuing to do more of the same will be an auto company’s eventual demise.

I sat down with Mark Grodzinsky, one of TechArena’s newest voices of innovation, to discover more about his journey and what drives him. A product and market leader who spent his career at the intersection of semiconductors, wireless, SaaS, and AI, Mark has held roles from early startups to Fortune 500 leadership, building ecosystems that turned Wi-Fi, IoT, and cloud innovations into global platforms. Check out the conversation.
1. Can you tell us a bit about your journey in tech?
I started my post-business school career at Mobilian, a wireless startup working on early Wi-Fi and Bluetooth technologies that was later acquired into Intel. This was an experience that set the tone for my journey as a pioneer in emerging technologies. From the beginning, I’ve been drawn to opportunities where the challenge wasn’t just building a product, but creating an ecosystem around it so innovation could scale.
That pattern has repeated throughout my career—from helping establish global Wi-Fi and WiGig standards, to leading startups acquired into Fortune 500s, to shaping categories in semiconductors and networking. At Ruckus, I helped transform a hardware business into SaaS and built an IoT unit from the ground up. Most recently, I’ve been focused on applying AI to reshape network observability in data centers, bringing clarity and value to an increasingly complex digital world.
The common thread is a passion for innovating at inflection points, telling the story in a way that resonates, and building the markets and partnerships that make technology matter.
2. Looking back at your career path, what’s been the most unexpected turn that ended up shaping who you are today?
The biggest surprises came from chance encounters. Coming out of MIT with electrical engineering degrees, I expected to work as an engineer, but a lunch with a fraternity alumnus led me to Motorola’s semiconductor rotation program. That gave me a front-row seat to how chips are actually built and sparked my curiosity about how technology moves from lab to market — which pushed me toward business school.
Another stroke of serendipity came during my MBA. I took a summer internship at a small Austin startup, Silicon Labs, because I wanted to be near my girlfriend at the time. I worked on a project evaluating whether to enter the nascent Bluetooth market. We ultimately passed, but in those early days of wireless, even being in the conversation made me a “wireless expert.” Just as importantly, that internship introduced me to a lifelong mentor who has guided me throughout my career. That relationship was as pivotal as the technical experience itself.
That girlfriend-inspired internship, an invaluable mentor, and “accidental” expertise in wireless ended up setting me on a 30-year path as a pioneer in Wi-Fi, IoT, SaaS, and now AI. It taught me that careers aren’t always well planned and linear — sometimes, being open to unexpected turns is what leads to the most meaningful journeys.
3. When you’re evaluating new ideas or technologies, what’s your framework for separating genuine innovation from hype?
I start with a simple question: what real value does this create for customers or the market? Too often, emerging concepts are wrapped in buzzwords that sound impressive but don’t deliver meaningful outcomes. I try to separate the hype from the substance by asking: does this innovation measurably improve a customer’s business, experience, or efficiency compared to yesterday?
That value can take many forms — financial return, competitive differentiation, a new way of working, or even a shift in how an industry thinks about a problem. But there has to be something tangible that makes customers’ lives better. White papers and technical jargon don’t qualify on their own. For me, genuine innovation is defined not by the novelty of the technology itself, but by the impact it has in the real world.
4. What's the biggest misconception you encounter about innovation in the tech industry?
That the best idea automatically wins. Success usually depends on a lot more than just the brilliance of the concept.
Turning innovation into real impact requires the right product–market fit, timing, and sometimes even luck. It depends on whether the ecosystem is ready to support it, whether competitors and collaborators align, and whether the economics make sense — from the cost to develop, to the cost to scale, to the potential disruption for established players.
Truly transformative ideas do occasionally break through any obstacle, but most successful innovations aren’t just about the idea itself. They emerge from the convergence of market need, timing, product alignment, ecosystem readiness, and cost. For me, that makes innovation even more fascinating: it’s not just about invention, it’s about orchestration.
5. What's a book, podcast, or idea that fundamentally changed how you think about technology or business?
The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. The core idea is that the most important events are also the least predictable — and instead of trying to forecast them, the best strategy is to build robustness and be ready to seize opportunities when they arrive.
The timing of that book’s release in 2007 was remarkable. That same year, the iPhone debuted, fundamentally reshaping how the world thought about computing. At the time, I was at Wilocity, a startup pioneering 60 GHz wireless technology. We had written our business plan through a traditional computing lens, but within six months, we had to completely rewrite it because the world had shifted overnight.
That experience taught me firsthand what Taleb described: industries are often reshaped by Black Swan events, and even those closest to the innovation don’t always grasp its ultimate impact. From Wi-Fi to the Internet, from mobile phones to cloud computing, and now AI, the biggest changes are often the ones we least expect. That’s why I believe the most exciting work is building resilient systems and strategies that not only withstand disruption, but thrive because of it.
6. When you're facing a particularly complex problem, what’s your go-to method for finding clarity?
When I’m facing a complex problem, my instinct is to break it down into its simplest components. It’s something I even teach my kids when they get stuck on math word problems: strip away the extra words and ask, what is this sentence actually telling us? Once you separate what’s essential from what’s noise, the underlying issues are usually far more straightforward than they first appear.
I apply the same approach in business. By isolating the core questions and tackling them one by one, complexity becomes manageable. And once those smaller pieces are clarified, you can stitch them back together into a solution that addresses the whole problem with a clearer sense of purpose.
7. Outside of technology, what hobby or interest gives you the most inspiration for your professional work?
Outside of technology, two passions have shaped how I think about leadership and teamwork: soccer and music. I played competitive soccer through college and beyond, and I also trained as a classical percussionist, performing in orchestras for many years. Both disciplines demand relentless individual practice and mastery, but also the humility to integrate seamlessly into a larger whole.
In soccer, no matter how skilled you are individually, success depends on whether the team plays as a cohesive unit. In an orchestra, the same holds true — your part must be precise, but it must also harmonize with every other instrument. Both pursuits have taught me the balance between striving for personal excellence and ensuring that excellence contributes to collective success. For me, greatness isn’t defined by the soloist or the star striker, but by how well the group performs together. That philosophy has guided the way I lead teams in business: pushing for the highest standards individually, while never losing sight of the collective responsibility to deliver as one.
8. What excites you most about joining the TechArena community, and what do you hope our audience will take away from your insights?
What excites me most about joining the TechArena community is being around people who are doing cool new things — or taking existing things and finding cool new ways to do them. I’m a lifelong learner, and what draws me to emerging markets is exactly that sense of discovery. Stumbling onto Wi-Fi early in my career met two of my most important professional and emotional needs: it was new, and it was cool. I’ll never forget plugging a Wi-Fi card into my laptop and suddenly being online — mind blown.
That’s the energy I get from TechArena. I’ve always thrived more in a room full of people innovating together at a whiteboard than sitting alone with a problem. This community feels like that room — buzzing with ideas, energy, and collaboration.
What I hope the audience takes away from my insights is not just my experiences in wireless, IoT, SaaS, or AI, but the bigger pattern: how to spot inflection points, how to build markets around technology, how to tell their stories, and how to create ecosystems that last. And hopefully, I can also share a few “that’s so cool” moments along the way.
9. If you could have dinner with any innovator from history, who would it be and what would you ask them?
Ludwig van Beethoven — the greatest musical composer who ever lived. I would ask him how important his physical hearing was to his ability to compose. My hypothesis is that Beethoven always “heard” the music in his mind — the phrases, harmonies, and orchestrations — whether he could physically hear them or not.
It’s remarkable that many of his greatest works were written after he had lost much of his hearing. Perhaps he relied on that sensory input early in his career to shape his sound, but later, his imagination took over and he was essentially transcribing the symphonies that already existed in his head. I’d love to understand how he bridged the gap between the physical act of hearing and the creative act of composing, because it speaks to the essence of innovation: envisioning something that doesn’t yet exist and bringing it into the world.
I’ve always been inspired by creators — composers, architects, inventors — those who imagine and build. Many can perform, but it’s a different skill to create something new from nothing. That act of creation is what I admire most.

Recorded at AI Infra Summit 2025 in Santa Clara: Carrier Chief Data & AI Officer Arun Nandi on infra as AI’s backbone, how early adopters win on ROI and speed, and what changed in the last 12–24 months.

Meet Tannu Jiwnani—one of TechArena's newest voices of innovation. Tannu is a cybersecurity and identity leader who believes the future of innovation is responsible, secure, and inclusive.
We sat down with her to chat about security-by-design, blending AI with human oversight and creativity, why closing the talent and diversity gap is critical, and so much more. Check it out:
A: I began my career after graduate school in Florida, first as a business analyst and then as a data analyst focused on anti–money laundering. That experience was my introduction to fraud detection and prevention. My passion for problem-solving led me to pursue a master’s degree in Information Systems and Operations Management, which became the foundation of my career. Over the years, I have specialized in cybersecurity and identity protection, leading high-impact initiatives such as incident response to major cyber attacks and modernizing identity systems that safeguard millions of users. Beyond the technical work, I have made it a priority to mentor women and underrepresented groups, because I believe visibility creates possibility and I want others to see that they belong in cybersecurity too.

A: I never set out to build a career in cybersecurity. I was always an engineer at heart, and later a business school graduate focused on process, operations, and efficiency. For a long time, I thought my path would stay in those lanes. The most unexpected turn came when I landed in cybersecurity without prior experience or even much exposure. I still remember sitting in my first few meetings, listening to discussions full of technical jargon that didn’t make sense to me at the time. Instead of feeling defeated, I realized that this challenge was an opportunity to start from scratch and embrace the joy of learning on the job.
That experience taught me something profound about myself: I could thrive in unfamiliar territory if I was willing to be curious, ask questions, and stay persistent. It reshaped my confidence, showing me that expertise is built through resilience and openness, not by knowing everything on day one. It also made me appreciate the broader impact of cybersecurity, how the systems we protect touch millions of lives. Looking back, that unexpected leap into a field I never planned for became the defining moment that shaped my career, my leadership style, and my passion for making cybersecurity more inclusive for others who may not see themselves in it yet.
A: For me, innovation is no longer just about creating something new. In today’s landscape, it is about creating something that is both impactful and responsible. When I began my career, I often thought of innovation in terms of speed, disruption, or the next big breakthrough. Over the years, I have seen that true innovation lies in solving meaningful problems, making technology more secure, and ensuring it is accessible to the people who need it most.
In cybersecurity, innovation is not only about staying ahead of cyber attacks, but also about designing systems that people can trust and use safely. My definition has shifted from a focus on novelty to a focus on sustainability, accountability, and long-term value. Innovation today is about building solutions that stand the test of time and make a positive difference across industries and communities.
A: Right now, the relationship between AI advancement and human creativity is a bit of a mixed bag that we are all still exploring. There is clearly a learning curve as we figure out how to use AI responsibly and effectively, and that means being mindful of both its strengths and its limitations. AI can accelerate what we do and open up new possibilities, but it also requires human oversight to ensure that outcomes are ethical, accurate, and truly innovative. I believe the future will not be about choosing between AI and human creativity, but about learning how to blend the two in ways that amplify our potential while keeping accountability at the center.
A: If I could solve one major challenge in the tech industry today, it would be closing the talent and diversity gap. Despite all the progress we have made, there are still too many barriers that keep women, people from underserved communities, and nontraditional backgrounds from thriving in tech. We talk a lot about innovation and security, but without diverse perspectives at the table, we miss out on creative solutions and introduce blind spots into our systems.
Addressing this challenge is not just about hiring, it is about building inclusive pipelines, mentorship networks, and workplace cultures where people can grow and feel they belong. If we can solve this, the entire industry becomes stronger, more resilient, and better equipped to create technology that works for everyone.
A: Professionally, one idea that fundamentally changed how I think about technology is that security is not just a feature, it is a foundation. Early in my career, I saw security as something that came after innovation, a layer to protect what was already built. Over time, I realized that the most resilient and impactful technologies are designed with security embedded from the very beginning. That shift has shaped how I approach every project, from identity protection to incident response. It also reframed how I think about business: security is not just about defense; it is about trust. When people trust the systems they use, adoption grows, opportunities expand, and innovation can truly thrive.
More recently, Atomic Habits by James Clear has been surprisingly practical for me. I know it is often talked about as a “hyped” book, but in moments when I am stretched thin, its focus on small, consistent actions helps me re-center and stay on track. It has been a reminder that progress often comes from building sustainable habits rather than relying on bursts of motivation, and that mindset has been invaluable in both my personal growth and professional resilience.
A: Outside of technology, planting and working out give me some of my greatest inspiration. Planting reminds me that growth requires patience, care, and the right conditions—lessons that influence how I think about nurturing teams and long-term strategies in cybersecurity. Working out gives me resilience and discipline. It reinforces the importance of consistency, pushing through challenges, and showing up even when it is difficult. Together, these hobbies keep me grounded, balanced, and focused, while also strengthening the mindset I bring into high-pressure situations at work.
A: What excites me most about joining the TechArena community is the opportunity to connect with people who are just as passionate about technology as they are about its impact on the world. Communities like this create space for dialogue, collaboration, and fresh perspectives, which is where true innovation thrives. I am particularly excited to share insights from my journey in cybersecurity and identity protection, while also learning from others who bring different experiences and expertise.
A: If I could have dinner with any innovator from history, it would be Katherine Johnson. I first learned about her through the film Hidden Figures, and her story left a lasting impression on me. She not only shaped one of the most important moments in history by helping put humans on the moon, but she did so while breaking barriers of gender and race in a time when her presence in those rooms was questioned. I would ask her how she found the confidence to keep speaking up when she was often the only one of her kind at the table, and how she balanced the weight of that responsibility with the joy of doing groundbreaking work. Her courage, brilliance, and persistence continue to inspire me, especially as I think about what it means to show up fully in spaces where you may not always feel you belong.

During Yotta 2025, I had a chance to sit down with Joe Reele, vice president – solution architects at Schneider Electric, to chat about the company’s new Prefabricated Modular EcoStruxure™ Pod Data Center, built in partnership with Compass Datacenters.
It’s designed to simplify the notoriously complex white space build-out process by delivering a factory-tested, ready-to-install pod that integrates power, cooling, and IT networking into a single modular unit.
“This is about delivering resilience, sustainability, and speed in a world where clients can’t afford to wait,” Joe said. “Prefabrication is the next step in that journey—and it’s only the beginning.”
Joe pointed to three pressures shaping customer expectations: speed, simplicity, and consistency.
“The market really drove us this way,” he said. “Clients need facilities that are delivered faster, at lower cost, and with no risk—while ensuring performance is predictable and repeatable. Meeting those demands is what led us to rethink how white space is designed and deployed.”
The goal isn’t just speed, but repeatability at scale. As he noted, “Low cost only matters until you have a major incident. And when clients have hundreds of data centers, the last thing they want is 100 different one-off designs.”
The pod is delivered as a standardized base unit, but with options baked in for cooling architectures, cabling, and power distribution. That balance between uniformity and flexibility was deliberate.
“It’s like the Ford F-150,” Joe said. “They have one chassis, but 35 different models—from basic to luxury. We’ve baked adaptability into the base design, so when a client comes with a request, the answer becomes much easier.”
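To make the “one chassis, many models” idea concrete, here’s a minimal sketch of how a standardized pod base with a constrained option set might be modeled. This is purely illustrative and assumes nothing about Schneider Electric’s actual product structure; every class name, option, and default value below is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class CoolingArchitecture(Enum):
    # Hypothetical option names, for illustration only
    AIR = "air"
    REAR_DOOR_HX = "rear_door_heat_exchanger"
    DIRECT_LIQUID = "direct_to_chip_liquid"


class PowerDistribution(Enum):
    BUSWAY = "overhead_busway"
    RACK_PDU = "rack_pdu"


@dataclass(frozen=True)
class PodBase:
    """The standardized 'chassis': every pod ships with the same core."""
    rack_count: int = 12
    it_network_integrated: bool = True


@dataclass
class PodConfiguration:
    """A specific 'model': the shared base plus a small, supported option set."""
    base: PodBase = field(default_factory=PodBase)
    cooling: CoolingArchitecture = CoolingArchitecture.AIR
    power: PowerDistribution = PowerDistribution.BUSWAY
    preterminated_cabling: bool = True

    def summary(self) -> str:
        cabling = "pre-terminated" if self.preterminated_cabling else "field-terminated"
        return (f"{self.base.rack_count}-rack pod | cooling={self.cooling.value} | "
                f"power={self.power.value} | cabling={cabling}")


# Two 'models' built from the same chassis, echoing the F-150 analogy.
standard = PodConfiguration()
high_density = PodConfiguration(cooling=CoolingArchitecture.DIRECT_LIQUID)
print(standard.summary())
print(high_density.summary())
```

The point of the sketch is the shape of the design: the base never varies, and flexibility lives in a small, enumerated option surface, so a new client request maps to a supported combination rather than a one-off build.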
Sustainability factored into the design process as much as speed. Schneider Electric and Compass have emphasized reducing embedded carbon in packaging and shipping, as well as eliminating waste in installation.
“When we say we are serious about sustainability, we mean that from how we make and produce our product, to what earth minerals we’re using, to how much energy it takes to build it,” Joe said. “Over the years, we’ve taken more and more carbon out of packaging, shipping, and manufacturing. Each piece may seem small, but together they add up.”
He also acknowledged the challenge customers face in balancing growth with net-zero commitments: “Our clients’ growth is going through the roof, but they also have aggressive net-zero goals. Prefabrication helps them scale while keeping sustainability in focus.”
Compass and Schneider have collaborated for years, and Joe pointed to that history as a key factor in making prefabrication viable.
“You can’t do this kind of work without trust,” he said. “The Compass team gave us the opportunity to earn that trust, and that’s been essential. Both companies check egos at the door and focus on solving problems together. That’s how you move from concept to reality.”
Joe believes prefabrication will become the industry standard for white space fit-outs, but he also sees a larger shift on the horizon: software-driven integration.
“The next step is software—the digital thread,” he said. “When power, cooling, and IT are stitched together on one network, data centers can move toward autonomous operations. And when that happens, data centers won’t just consume power—they’ll help stabilize the grid. Mark my words: the data center will become the great grid stabilizer of the world.”
Prefabricated white space isn’t new—making it a first-class, configurable product is the shift worth watching. From an operator’s perspective, the appeal is schedule determinism, repeatability, and risk reduction, with sustainability layered in. The open questions we’ll track:
How well density envelopes and serviceability hold up in production
Whether lifecycle carbon reductions materialize versus conventional builds
How quickly the “digital thread” vision linking IT and OT becomes real

Allyson Klein hosts Manu Fontaine (Hushmesh) and Jason Rogers (Invary) to unpack TEEs, attestation, and how confidential computing is moving from pilots to real deployments across data center and edge.