
For the past two years, enterprise AI postmortems have sounded the same. A pilot stalls. Results look inconsistent. Trust erodes. The verdict follows quickly: the model is immature, the tools are unstable, the technology moved too fast.
That explanation is convenient. It is also wrong.
AI did not introduce fragility into enterprise data platforms. It exposed what was already there. Long before large models showed up, many platforms were held together by undocumented assumptions, fragile transformations, and ownership gaps everyone learned to work around. AI did not break those systems. It removed the ability to ignore their weaknesses.
What teams are facing is not an AI failure. It is a systems reckoning.
Data debt is often framed as bad quality or missing fields. That framing misses the point. The real debt is structural. It lives in pipelines no one fully owns, logic that exists only in people’s heads, and transformations that accumulated over years without a clear contract.
Traditional analytics could tolerate this. Dashboards aggregate. Reports smooth over inconsistencies. When something looks off, an analyst adjusts a filter or adds a footnote. Time absorbs the problem.
AI does not.
AI pipelines pull from multiple sources, assemble context, and produce outputs that appear authoritative. Every hidden assumption becomes an input. Every undocumented rule becomes a risk. Every unclear boundary becomes a debugging exercise with no obvious owner.
Consider a familiar enterprise pattern. A customer dimension evolves over a decade. Marketing owns part of it. Finance applies overrides. Operations enrich it downstream. No one owns it end to end. Queries reference it through layers of views. The system works because people know where it breaks.
Introduce an AI system that needs customer context in near real time. The cracks surface immediately. Conflicting attributes. Missing lineage. Output shifts no one can explain. The AI did not create the inconsistency. It forced it into the open.
This matters because AI compresses feedback loops. Issues that once took quarters to surface now appear in days. What used to be background noise becomes a blocking problem. Debt that was once survivable becomes operationally expensive.
This is a well-understood pattern in data platform maturity discussions: when assumptions aren’t explicit, systems fail under new latency, reliability, and trust requirements.
Trust is the currency of AI systems. Without it, outputs are questioned, bypassed, or quietly ignored. Trust does not come from model accuracy alone. It comes from traceability.
When an AI output is challenged, the first question is rarely about hyperparameters. It is about provenance. Where did this data come from? Why does it say this? What changed since yesterday?
Lineage answers those questions. Ownership makes the answers actionable.
This is not about governance theater or compliance checklists. It is about operational clarity. Who owns this dataset? What assumptions does it encode? Who signs off when it changes?
This is also where many enterprise AI efforts stall: trust breaks when teams can’t answer provenance questions consistently.
In practice, that means contracts, tests, and change management around critical datasets—not just documentation.
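As a minimal sketch of what such a contract and test might look like, consider the example below. It assumes an entirely hypothetical customer dimension with made-up columns, types, and rules; it is illustrative only, not a reference to any particular data platform or testing framework.

```python
# Minimal sketch of a dataset "contract" plus an automated check against it.
# The dataset name, owner, columns, and rules below are hypothetical examples.

CUSTOMER_DIM_CONTRACT = {
    "owner": "customer-data-team",  # who signs off when the dataset changes
    "columns": {
        "customer_id": {"type": str, "nullable": False},
        "segment":     {"type": str, "nullable": True},
        "created_at":  {"type": str, "nullable": False},  # ISO-8601 date string
    },
}

def validate_rows(rows, contract):
    """Return a list of human-readable contract violations for a batch of rows."""
    violations = []
    cols = contract["columns"]
    for i, row in enumerate(rows):
        for name, rule in cols.items():
            value = row.get(name)
            if value is None:
                if not rule["nullable"]:
                    violations.append(f"row {i}: '{name}' is required but missing")
            elif not isinstance(value, rule["type"]):
                violations.append(
                    f"row {i}: '{name}' has type {type(value).__name__}, "
                    f"expected {rule['type'].__name__}"
                )
    return violations

if __name__ == "__main__":
    sample = [
        {"customer_id": "C-001", "segment": "enterprise", "created_at": "2024-05-01"},
        {"customer_id": None, "segment": "smb", "created_at": "2024-06-12"},
    ]
    for problem in validate_rows(sample, CUSTOMER_DIM_CONTRACT):
        print(problem)
```

Run in a pipeline or CI job, a check like this turns "who owns this and what does it assume" from tribal knowledge into something that fails loudly when it stops being true.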
Dashboards could survive ambiguity because they were passive. AI systems are not. They summarize, recommend, and influence decisions in real time. That shift raises the bar.
A report could be wrong for weeks with limited impact. An AI recommendation can trigger action immediately. Confidence must extend beyond the output to the system that produced it.
Many platforms struggle here because clarity was deferred. Storage scaled. Compute scaled. Understanding did not. The result is a technically impressive platform that no one can fully explain. AI makes that state unsustainable.
Teams that treat lineage and ownership as first-class concerns move faster, not slower. They spend less time debating what the system is doing and more time improving it.
Another common complaint about AI is cost. Training runs are expensive. Inference adds up. Storage grows faster than planned. Budgets get burned.
The instinct is to blame the workload. The reality is less flattering.
AI workloads punish inefficiency. They amplify waste that already existed. Redundant datasets, unnecessary joins, over-retained history, and poorly scoped transformations were tolerable when they powered nightly reports. They become ruinous when they sit on the critical path of AI systems.
Poor data hygiene leads to runaway cost because the platform does more work than it needs to. It processes data that should have been archived. It enriches context that is never used. It recomputes logic that should have been materialized once.
Cost control is an architectural outcome, not a finance exercise. When engineers understand data flows end to end, they can design for efficiency. When they do not, cost becomes an external constraint imposed after the fact.
This is why cost governance has moved upstream into engineering practice: measure unit costs, instrument pipelines, and design to avoid waste.
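As one hedged illustration of measuring unit costs, the sketch below times each pipeline stage and derives an approximate cost per processed row. The stage names, row counts, and the DOLLARS_PER_CPU_SECOND rate are assumptions invented for the example, not figures from any real platform.

```python
import time
from contextlib import contextmanager

# Hypothetical blended compute rate; replace with a figure measured in your environment.
DOLLARS_PER_CPU_SECOND = 0.00005

@contextmanager
def metered_stage(name, stats):
    """Record wall-clock time for a pipeline stage so unit costs can be derived."""
    start = time.perf_counter()
    try:
        yield
    finally:
        stats[name] = stats.get(name, 0.0) + (time.perf_counter() - start)

def report(stats, rows_processed):
    for name, seconds in stats.items():
        cost = seconds * DOLLARS_PER_CPU_SECOND
        print(f"{name}: {seconds:.3f}s, ~${cost:.6f} total, "
              f"~${cost / rows_processed:.9f} per row")

if __name__ == "__main__":
    stats = {}
    rows = [{"id": i} for i in range(100_000)]  # stand-in for a real extract
    with metered_stage("transform", stats):
        rows = [{**r, "flag": r["id"] % 2 == 0} for r in rows]
    with metered_stage("aggregate", stats):
        evens = sum(1 for r in rows if r["flag"])
    report(stats, len(rows))
```

The specific numbers matter less than the habit: once cost per row (or per query, or per inference) is visible per stage, waste such as recomputing logic that could be materialized once stops hiding in the monthly bill.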
Teams that scale AI treat efficiency as a design requirement. They ask hard questions early. What data is actually needed? What freshness is justified? What assumptions can be encoded once instead of recalculated repeatedly? That discipline pays off well beyond AI use cases.
A common objection is that AI itself is too unstable for enterprise use. Models evolve. Outputs vary. The pace of change makes durable systems impossible.
There is truth here, but it is incomplete.
Teams with disciplined data foundations are scaling AI today. They are not chasing every new capability. They focus on reliability, clarity, and ownership. When models change, they adapt because their data layer is not a black box.
The difference is not talent or tooling. It is systems thinking. Organizations that treat data platforms as long-lived products rather than one-time projects have fewer surprises. They know what they own and where it breaks. AI becomes an extension of the platform, not a threat to it.
Blaming AI immaturity avoids a harder conversation. It is easier to say the technology is not ready than to admit the platform was never as solid as assumed.
AI did not break enterprise data platforms. It told the truth about them.
For years, many organizations optimized for output over understanding. They shipped faster than they documented. They scaled storage before ownership. They accepted ambiguity because it was convenient. AI removes that option.
This is not a failure story. It is an opportunity. AI acts as a forcing function that pushes data platforms toward maturity. It rewards clarity and penalizes shortcuts. It turns invisible debt into visible risk.
The path forward is not to pause AI adoption. It is to take data platforms seriously as long-term systems. Invest in ownership. Make lineage explicit. Design for efficiency. Treat context as infrastructure.
Teams that do this will find that AI does not destabilize their platforms. It strengthens them.

Hedgehog CEO Marc Austin joins Data Insights to break down open-source, automated networking for AI clusters—cutting cost, avoiding lock-in, and keeping GPUs fed from training to inference.
There was a time in the late ’90s when the first dot-com boom was underway, mobile phones were going mainstream, and personal computers were finally becoming portable. But the internet was still a physical destination. It was a place you went to, a connected desktop in a home, office, or internet café, not a parallel universe you could access on the go.
I started my career in the trenches, developing some of the industry’s first 802.11 (Wi-Fi) transceivers. Looking back, Wi-Fi wasn’t so much a technology breakthrough as it was an inevitable response to a shift in human behavior. We wanted to communicate. We wanted access to the riches of the internet. We wanted our computers. And we wanted them with us, all the time.
It was a chaotic, fragmented, loud, and wildly innovative time. We had proprietary “Turbo Modes,” one-upmanship, conflicting standards, and dozens of startups and a few “grown up” companies, all claiming they owned the future. But eventually, and rapidly, an ecosystem developed that transformed Wi-Fi from a novelty into an essential utility, sitting beside water and electricity, within 20 years. There was an explosive catalyst, a gold rush, and an eventual consolidation around standards.
Today, as I watch the optics industry in 2026, I can’t help but feel the same electric hum in the air. The same fragmentation. The same confusion. The same catalytic force. The same gold-rush energy. The same inevitability that something is about to happen, must happen, to enable the latest seismic shift in human behavior.
If you squint your eyes, it almost looks like 2003 all over again…
What’s often forgotten about early Wi-Fi is that its success was not inevitable. Wireless is a shared medium. There is no such thing as a private RF universe. Interference, broken roaming, competing beamforming mechanisms, and failed interoperability didn’t just hurt competitors; they damaged customer trust in the entire category. If Wi-Fi was unreliable, it wouldn’t matter who had the fastest radio. The market itself would be dead on arrival.
That realization changed behavior. Through a few painful stumbles, the industry learned that there had to be a common baseline, a set of rules everyone followed, to keep the air clean and the experience predictable. Differentiation still mattered, but it had to be built on top of a shared foundation, not at its expense.
Competitors worked together. Standards bodies matured. Interoperability test beds emerged. Certification programs enforced compliance and guardrails. Vendors argued fiercely, but within boundaries that preserved the viability of the ecosystem. It wasn’t altruism. It was survival. Grow the pie first, then fight like hell for your share of it.
This ecosystem balance only worked because the cast of characters was diverse and complementary. There were large, established players acting as the adults in the room, setting expectations around enterprise reliability, security, and scale. There were aggressive startups injecting energy, new ideas, and technical breakthroughs that pushed the state of the art forward. And there was Intel.
Intel wanted to make mobile computing inevitable. Creating a new, fast-growing category for higher-margin mobile processors was simply good business. But Intel did something unprecedented: it put its own balance sheet behind the ecosystem. A $300 million Centrino marketing campaign, unheard of at the time, made Wi-Fi synonymous with mobility, reliability, and interoperability. It was a spark that turned momentum into a conflagration.
Intel wasn’t alone. Cisco built enterprise-grade wireless networks that IT could trust. Microsoft pulled wireless deep into the operating system, normalizing it for developers and users alike. Dell and other OEMs made Wi-Fi table stakes in mobile computing. The ecosystem had champions. It had shepherds. And it had plenty of unruly sheep. Together, that unlikely combination produced one of the most successful infrastructure transitions in modern technology history.
Wi-Fi didn’t win because one company dominated early. It won because enough powerful players decided, independently, and selfishly, that growing the pie together mattered more than grabbing the biggest slice first.
For a long time, optics was boring, in the best possible way. Optical networking was reliable, predictable, and largely invisible. Bandwidth increased on a steady cadence. Power budgets were understood. Distances were fixed. Traffic patterns were well behaved. As long as you followed the playbook, the system worked.
10G became 25G became 40G became 100G became 400G. Roadmaps were clear. Margins were thin but stable. Optics was foundational, but rarely strategic.
Then AI broke the playbook.
The rise of large language models, agentic systems, and massive multi-modal workloads is driving an insatiable demand for compute that simply does not fit inside traditional data center assumptions. Training and inference push processing density, east–west bandwidth, and latency sensitivity into entirely uncharted territory.
Clusters no longer scale in a single dimension. They scale up, packing more compute into a rack. They scale out, spreading workloads across rows and halls. And increasingly, they scale across, connecting multiple data centers into a single logical system.
At each step, the network had to keep pace. Exceptionally expensive GPUs cannot sit idle waiting for data. As clusters stretch across racks, buildings, and campuses, the network stops being a background transport and becomes a gating factor for utilization, determinism, and overall system efficiency.
Then the industry hit a power wall.
The constraint is no longer real estate or fiber; it is megawatts. New data centers are being built where power is available, not where latency or convenience is optimal. And that power constraint applies to everything: compute, switching, cooling, and optics alike.
The result is a mandate optics has never faced before, and must now satisfy simultaneously:
1. Dramatically increase compute scale.
2. Deliver higher speed and tighter determinism so GPUs never wait.
3. Reduce power consumption per bit, per port, per rack, per data center.
That combination changes optics from predictable plumbing into a first-order architectural constraint.
Copper interconnects, once “good enough,” are becoming a barrier at scale. Signal integrity, power loss, and reach limits are no longer theoretical; they are operational. Co-packaged optics, long discussed in labs and roadmaps, are now moving into real deployments, bringing optics closer to switch silicon and reducing copper distances and power consumption. Pluggable optics no longer monopolize the design space. Optical switching is re-emerging not as an experiment, but as a necessity.
In this world, optics stops being plumbing. It becomes both the limiting factor and the enabling force of AI infrastructure.
Just like early Wi-Fi, this shift triggers a burst of simultaneous, multi-vector innovation.
Startups attack every layer at once: new modulation schemes, novel laser technologies, co-packaged optics, disaggregated control planes, fiber automation, thermal management, and power-aware networking. Incumbents are forced to re-architect product lines that were stable for a decade. Conferences fill with competing visions, overlapping claims, and incompatible approaches.
It feels chaotic. Because it is. But once again, chaos is not a failure mode; it is a signal that the industry is very much alive.
And once again, chaos requires gravity. In the AI era, NVIDIA plays the role Intel once did. Again, not out of altruism, but out of self-interest. NVIDIA’s GPUs, interconnect requirements, and system architectures now define the shape of modern AI clusters. Their need for scale, efficiency, and determinism forces the entire optical ecosystem to evolve faster than it otherwise would. Like Intel with Centrino, NVIDIA is pushing the levers that expand the market, because doing so directly expands its own opportunity.
The hyperscalers are doing the same. Meta, Google, Microsoft, Amazon, and others are committing tens of billions of dollars to build AI infrastructure capable of supporting agentic workloads at planetary scale. They are willing to fund new architectures, absorb early inefficiencies, and accept real risk to break through existing limits.
If this all feels strangely familiar, it should.
Fragmentation? Check. The optics industry today looks a lot like Wi-Fi did in the early 2000s: fragmented, noisy, and bursting with parallel innovation. Dozens of companies are attacking adjacent problems simultaneously: DSPs, lasers, co-packaged optics, thermal management, fiber automation, disaggregated control planes. No single approach has emerged as “the” answer, and that uncertainty is driving experimentation in every direction at once.
Intellectual chaos? Check. The intellectual chaos is unmistakable. Conferences are filled with competing visions and overlapping claims, with multiple companies promising order-of-magnitude breakthroughs through fundamentally different architectures. Wi-Fi went through the same debates: MIMO, MU-MIMO, interference with incumbent RF systems, number of streams, multiple versions of beamforming, proprietary turbo modes versus standards. None of those questions had clean answers at the time, and optics is no different today.
Massive funding inflows? Check. A gravitational pull toward consolidation? Absolutely.
Capital is flowing freely, another familiar signal. Investors and operators alike sense that optics is no longer incremental plumbing; it’s a breakout category with strategic importance. That gravitational pull inevitably leads toward consolidation. We saw this clearly with Marvell’s recent acquisition of Celestial.ai, a move that signals the era of standalone components is ending. Just as Wi-Fi eventually centered around a small number of dominant silicon platforms, optics will likely converge around a handful of dominant players who can integrate those disjointed innovations into a platform.
And most importantly: a forcing function? YES! Wi-Fi needed to cut the wire, then work in dense deployments, then enable low-power IoT, and now act more deterministically to power our agentic future.
Optics needs to make AI scale physically possible; within switches, across racks, across data centers, without collapsing the grid that feeds it.
When a market has a forcing function, it must evolve. There is no choice.
We’ve seen this movie before, and Wi-Fi left behind a few hard-earned lessons that optics would be wise to absorb.
Lesson 1: Standards and interoperability always win — even when the process is messy.
Never bet against Ethernet and never bet against Wi-Fi. Proprietary performance advantages are tempting early on, but shared infrastructure lives or dies by common language. Great compromises in the days of 802.11g and 802.11n brought the industry together and left proprietary turbo modes as window dressing for the retail market. Coopetition flourished and the winners were those who embraced it. Optics will face the same tradeoffs, and the ecosystems that prioritize interoperability early will ultimately outlast those that don’t.
Lesson 2: The market rewards companies that grow the pie.
Intel didn’t sell access point silicon. Microsoft didn’t sell radios. Dell didn’t care which chipset won; they bought from everyone. What they all cared about was expanding the market itself. Their success came from making Wi-Fi inevitable, not exclusive. Optics needs its own version of that mindset.
Lesson 3: Simplification beats elegance.
Wi-Fi became ubiquitous not because it solved the RF problem perfectly, but because it made the technology easy for millions of people to deploy. Optics is approaching a similar inflection point. Operators aren’t asking for more clever architectures; they’re asking how to deploy across dozens of data centers, manage thermal and power budgets, and automate fiber paths with fewer humans in the loop. Elegance helps, but simplification wins.
Lesson 4: The winners are ecosystem players.
The most successful Wi-Fi companies didn’t just ship chips; they built platforms. Reference designs, SDKs, certification programs, developer ecosystems, and trusted brands mattered as much as raw performance. Optics now has the same opportunity, but only if the industry thinks beyond feeds, speeds, and component optimization.
If the analogy holds, and I believe it does, then optics is entering a decade defined by startup energy, vendor consolidation, architectural standardization, and deep vertical integration. Complexity will be abstracted away. New platforms will emerge. The conversation will shift from components to systems, and eventually to experiences.
The companies that win won’t just be the fastest or the most clever. They’ll be the ones that make optics predictable, operable, and trustworthy at scale. They’ll lean into interoperability before the market forces it. They’ll treat power, cooling, and fiber as software problems. They’ll partner with kingmakers rather than trying to outmuscle them.
And most importantly, they won’t try to own the whole pie. They’ll grow it.
Because every major networking revolution (Ethernet, Wi-Fi, cloud, and now AI fabrics) follows the same arc: breakthrough, fragmentation, chaos, consolidation, and ubiquity. Optics is squarely in the fragmentation and chaos phase.
That’s not a bug. It’s the signal that the industry is alive again.
When I sit in modern AI datacenters and look at the optical racks, I feel the same thing I felt holding a pre-standard 802.11g PCMCIA card in 2002:
“We don’t fully know what we’re building yet, but when we do, it will reshape the entire industry.”
Wi-Fi unlocked mobility. Optics will unlock AI at scale. And just like Wi-Fi, the winners won’t be the ones who optimize a component in isolation. They’ll be the ones who understand that ecosystems, not components, determine the future.

In a move that signals a significant restructuring of the semiconductor IP landscape, Synopsys and GlobalFoundries (GF) today announced a definitive agreement for GF to acquire Synopsys’ Processor IP Solutions business. The deal, which includes the ARC processor family and related software development tools, marks a pivotal moment for both companies as they sharpen their focus on the burgeoning Physical AI opportunity.
The transaction, expected to close in the second half of calendar year 2026, will see Synopsys’ Processor IP portfolio—ARC-V™ (RISC-V) and ARC® CPU IP, DSP IP, Neural Network Processing Unit (NPU) IP, and related software development tools including ARC MetaWare Development Toolkits—move into the GlobalFoundries ecosystem. The transaction also includes Synopsys’ ASIP Designer™ and ASIP Programmer™ tools for automating the design and implementation of application-specific instruction-set processors (ASIPs).
GF’s announcement also calls out the included ARC product lines as ARC-V, ARC-Classic, ARC VPX-DSP, and ARC NPX NPU, and says that upon closing, these assets and expert teams will be integrated with MIPS, a GlobalFoundries company.
For Synopsys, the divestiture looks like disciplined portfolio management. By offloading its processor business, Synopsys is doubling down on its leadership in interface and foundation IP.
“We are focusing our IP resources and roadmap to further our leadership in essential interface and foundation IP while winning new, high-value opportunities that advance our position as the leading provider of engineering solutions from silicon to systems,” said Sassine Ghazi, president and CEO of Synopsys.
This focus is more than just marketing speak. As AI chips become increasingly complex, the bottleneck is rarely the processor core alone; it’s the high-speed connectivity (PCIe, CXL, DDR) and the fundamental logic libraries that enable multi-die/chiplet architectures. Synopsys is positioning itself to be the indispensable provider of the “connective tissue” that powers AI from the cloud to the edge, while continuing to dominate the EDA software market, where it optimizes implementations for all processor ecosystems.
For GlobalFoundries, this acquisition is an aggressive step toward becoming a platform provider rather than a pure-play foundry. By acquiring ARC and integrating it with MIPS, GF is building a more complete “Physical AI” stack.
Physical AI refers to the deployment of AI in the tangible world—wearables, robotics, automotive, and industrial IoT—where power efficiency and custom silicon are paramount. By owning the processor IP, GF can offer its customers more tightly integrated, end-to-end solutions, lowering the barrier to entry for companies that want to move quickly from concept to high-volume manufacturing.
“This acquisition doubles down on our commitment to advancing our leadership in Physical AI,” noted Tim Breen, CEO of GlobalFoundries. “By combining Synopsys’ ARC IP and MIPS technologies with GF’s advanced manufacturing capabilities, we are lowering the barrier for customer adoption.”
Assets transferred: The Synopsys Processor IP portfolio includes ARC-V™ (RISC-V) and ARC® CPU IP, DSP IP, NPU IP, related software development tools including ARC MetaWare Development Toolkits, plus ASIP Designer™ and ASIP Programmer™. GF additionally describes the included ARC product lines as ARC-V, ARC-Classic, ARC VPX-DSP, and ARC NPX NPU.
The divestiture of Synopsys’ Processor IP Solutions business fits the pattern of the “New Synopsys” story arc: a company increasingly defining itself as an engineering-solutions platform from silicon to systems, especially after Synopsys completed its acquisition of Ansys in July 2025.
Layer on the NVIDIA partnership news from December 1, 2025—where NVIDIA announced an expanded strategic partnership with Synopsys and disclosed a $2 billion investment in Synopsys common stock (at a stated purchase price of $414.79 per share)—and the strategic emphasis on simulation, digital twins, and AI-accelerated engineering workflows becomes even clearer.
For GF, this is a “Foundry 2.0” play. In a world where specialized AI silicon is the new gold, being “just” a manufacturer isn’t enough. By owning the IP (ARC and MIPS) and packaging it with software tools, GF is positioning itself to deliver more “foundry-ready” platforms—particularly for physical AI use cases where power, latency, and tight integration matter.
The industry is watching closely. This deal consolidates ARC and MIPS under one roof. If GF can successfully integrate these teams and maintain the neutrality required to keep ARC customers comfortable through the transition, it will have carved out a serious niche in the Physical AI era.

Over the past several weeks, escalating AI storage demand and a lack of supply have begun to dominate tech headlines.
Industry coverage has pointed to enterprise HDD supply tightening sharply—Tom’s Hardware recently reported enterprise drives can be on backorder for up to two years, and it also noted HDD prices rose about 4% in Q4 2025, the biggest increase in eight quarters. Reuters reported in early December that AI-driven demand is contributing to a broader memory supply crunch, with manufacturers prioritizing higher-margin products and customers scrambling for allocation.
At the same time, the NAND market is flashing its own warning lights. TrendForce forecast that NAND Flash contract prices could rise 33–38% quarter-over-quarter in Q1 2026 as memory makers prioritize server and AI-related demand. And on the supplier side, Tom’s Hardware reported (citing Nomura) that SanDisk is expected to raise enterprise 3D NAND pricing for SSDs aggressively in Q1 2026—potentially more than doubling in some cases—tying the move to AI-driven storage demand and near-term supply pressure.
That backdrop matters for news out of VAST Data this week. In a briefing, the company framed the shortage as a market inflection point—and introduced a Flash Reclamation Program designed to repurpose NVMe SSDs already sitting inside customer environments, alongside a broader push around inference key-value (KV) cache persistence aligned with NVIDIA’s Inference Context Memory Storage (ICMS) platform direction.
In the briefing, VAST co-founder Jeff Denworth positioned the company as a meaningful consumer of enterprise flash via customer deployments, and framed the market as facing compounding constraints: HDD shortfalls pushing more demand into enterprise SSDs (especially QLC), plus a fresh wave of AI infrastructure requirements.
First, VAST says it will launch a Flash Reclamation Program designed to repurpose NVMe SSDs already sitting inside customer estates—including drives currently deployed behind other platforms—so customers can stretch existing media rather than wait on new allocations. In the Q&A, VAST was explicit that this can mean pulling SSDs from existing systems and redeploying them under VAST after rapid qualification.
Second, VAST argued that inference is about to generate a new class of storage demand as context moves from GPU memory into shared NVMe tiers, enabling faster reuse of prior context for long, multi-session workloads.
That second point maps closely to NVIDIA’s own platform messaging. NVIDIA has described ICMS as a BlueField-4-powered approach intended to extend inference context memory for multi-turn agentic AI and to support high-bandwidth sharing of KV cache across systems.
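To make the idea of a shared KV cache tier more concrete, here is a deliberately simplified Python sketch of a two-tier key-value cache that spills least-recently-used entries from a small in-memory tier to local disk. It is a conceptual toy under stated assumptions, not a representation of NVIDIA's ICMS, BlueField-4, or VAST's actual implementation.

```python
import os
import pickle
from collections import OrderedDict

class TwoTierKVCache:
    """Toy two-tier cache: a small 'hot' in-memory tier spills LRU entries to disk.

    In a real inference stack the hot tier would be GPU/HBM-resident KV tensors
    and the cold tier a shared NVMe namespace; here both are simple stand-ins.
    """

    def __init__(self, hot_capacity, spill_dir="kv_spill"):
        self.hot = OrderedDict()
        self.hot_capacity = hot_capacity
        self.spill_dir = spill_dir
        os.makedirs(spill_dir, exist_ok=True)

    def _spill_path(self, key):
        return os.path.join(self.spill_dir, f"{key}.pkl")

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)  # evict LRU entry
            with open(self._spill_path(old_key), "wb") as f:
                pickle.dump(old_value, f)

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        path = self._spill_path(key)
        if os.path.exists(path):  # re-promote from the cold tier
            with open(path, "rb") as f:
                value = pickle.load(f)
            self.put(key, value)
            return value
        return None

if __name__ == "__main__":
    cache = TwoTierKVCache(hot_capacity=2)
    for session in ("a", "b", "c"):
        cache.put(session, {"tokens": [session] * 4})  # session "a" spills to disk
    print(cache.get("a"))                              # served from the cold tier
```

The point of the sketch is the shape of the motion: keep hot context close, spill cold context to a cheaper shared tier, and re-promote it when a long-running session resumes.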
Meanwhile, the “HDD delays → more flash demand” narrative continues to circulate in the channel, with DigiTimes-linked reporting (and follow-on coverage) describing extended enterprise HDD lead times and increased interest in QLC alternatives.
VAST’s messaging lands because it’s not trying to create a problem—it’s trying to name one that independent sources are already surfacing.
The most revealing part isn’t the performance claims. It’s the go-to-market posture. A “reclaim the flash you already own” program is a shortage-era motion: it assumes constrained allocation, long lead times, and customers willing to tolerate disruption to free up scarce media.
On the AI side, KV cache is quickly becoming the next battleground for storage architecture narratives. NVIDIA’s ICMS framing makes KV cache persistence feel inevitable for long-context, multi-turn agents, and it creates a new category of “storage that behaves like memory.” VAST is positioning itself as the software and data-services layer around that shift—where efficiency, protection, and lifecycle controls become part of the ICMS-era value prop, not an afterthought.
In other words: the shortage story is bigger than VAST, but VAST’s response is a useful signal. When infrastructure vendors start building programs around reuse and reclamation—not just new boxes—it’s a sign the market expects constraints to persist, not clear up in a quarter.

I remember the early days of Wi-Fi, developing some of the industry’s first 802.11a/b/g transceivers. Back then, the mission was singular and remarkably simple: cut the wire.
Wireless has always evolved around its biggest pain point. First speed, then density, then IoT. Every era shifts when a new problem becomes the one we can’t ignore.
In the early years, the entire industry was engaged in a breathless race to make the air look like Ethernet. We obsessed over modulation schemes and channel widths, fighting physics to push throughput from 2 Mbps to 11 Mbps to 54 Mbps, and eventually toward Gigabit performance. Companies stacked on proprietary “Turbo Modes” and pre-standard features to squeeze out every bit and position themselves competitively.
And we won. The speed gap closed. Wi-Fi didn’t just catch wired performance at the residential edge and the enterprise edge — in many places it surpassed it.
Once raw throughput was “good enough,” the priority shifted. We moved from chasing speed to chasing density:
Can we make this work in a packed stadium?
On a subway platform in Tokyo?
In a high-rise where 200 access points sit next to and on top of one another?
That era led us to borrow techniques from cellular: OFDMA, MU-MIMO, BSS Coloring — tools to solve the wireless “cocktail party problem,” the RF equivalent of a noisy room where many devices speak at once and the network must separate overlapping conversations.
Then came the third wave: the Internet of Things. Suddenly, the devices connecting to our networks weren’t just laptops and phones; they were sensors, cameras, thermostats, wearables, industrial controllers, and all kinds of headless endpoints no one wants to update until it’s too late. The number of “things” began to outpace the number of people.
We realized that hauling all that data back to the cloud was often wasteful, so we started pushing compute outward — toward gateways, access points, and edge nodes — processing data closer to where it was created. The mindset shifted from performance to outcomes. Sensor networks don’t require much bandwidth, and no one cares what protocol they are using; they care about how the data is being used to make their lives better.
Today, we are hitting a new inflection point — one that makes the previous shifts look incremental.
In many enterprise environments, human client growth is no longer the main scaling driver. The next explosion in networking isn’t coming from people watching Netflix or scrolling Instagram. It is coming from autonomous agents. And unlike people, AI agents do not forgive “best effort.”
To see why, imagine a modern fulfillment center. Not humans pushing carts, but a hive of hundreds of Autonomous Mobile Robots weaving past each other at speed. Each robot negotiates right-of-way with a central controller, with safety systems watching for conflicts — a single distributed organism connected by an invisible wireless tether.
If that tether stretches into a noticeable hiccup — tens of milliseconds in the wrong moment — the system doesn’t “buffer.” It stops. A momentary disruption becomes a full-aisle shutdown. This is where “best effort” becomes a business risk rather than a minor annoyance.
To understand why the network architecture must change, you have to understand the difference between a human user and an AI agent.
Humans are incredibly adaptive. If you are on a Teams call and the video freezes for 500 milliseconds, you might grimace and cry out to your deity of choice, but your brain fills in the gap. If a web page takes an extra second to load, you wait. We are built to tolerate variance. Our networks were designed around this tolerance; we built best-effort systems that prioritized maximum throughput over consistent timing.
AI agents (robots, autonomous logistics bots, digital twins, and XR interfaces) are not adaptive in the same way. They require precision.
If a warehouse robot loses reliable connectivity at the wrong moment, it doesn’t “buffer”; it performs a safety stop. If an XR experience slips into noticeable lag, the user gets disoriented, or nauseous (“clean up on aisle 3”). These “users” don’t care about peak speed. To an AI agent, performance isn’t measured in gigabits per second; it’s measured in bounded variance.
Determinism means engineering to strict upper bounds on latency, jitter, and packet loss, and then meeting those bounds every time. “Good” is no longer a high average throughput. “Good” is the mathematical guarantee that 99.9999% of packets will arrive within a fixed window (e.g., 10 ms), regardless of RF congestion, multipath, or compute/buffer delay.
We are moving from an era of bandwidth to an era of determinism.
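As a rough sketch of what that target means operationally, the snippet below checks a batch of measured latencies against a fixed delivery window and a required fraction. The 10 ms window and six-nines figure echo the example above; the sample data is synthetic.

```python
import random

def meets_deterministic_bound(latencies_ms, window_ms, required_fraction):
    """True if at least `required_fraction` of samples arrived within `window_ms`."""
    within = sum(1 for x in latencies_ms if x <= window_ms)
    return within / len(latencies_ms) >= required_fraction

if __name__ == "__main__":
    random.seed(7)
    # Synthetic latency samples: mostly fast, with a few long-tail events.
    samples = [random.uniform(1.0, 6.0) for _ in range(100_000)]
    samples += [random.uniform(12.0, 40.0) for _ in range(5)]  # tail spikes
    ok = meets_deterministic_bound(samples, window_ms=10.0, required_fraction=0.999999)
    print("meets 10 ms / six-nines bound:", ok)
```

Even five tail events in a hundred thousand samples are enough to miss a six-nines bound, which is exactly why averages are the wrong lens for deterministic traffic.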
If the modern data center — with its massive GPU clusters — is the brain of the AI revolution, the wireless edge is the nervous system.
A brain in a jar is useless. To function, intelligence needs sensory input from the physical world. It needs to know who is in the room, where the asset is, what the environmental context is, and what the expected action (intent) will be.
This is the new mandate for the wireless edge. We must pivot from building “dumb pipes” that simply move data to building a sensory fabric that feeds context and intent to the enterprise AI.
This shift requires three fundamental architectural changes.
We need to stop marketing “fast” and start engineering “predictable.” The industry is acknowledging this reality, and Wi-Fi 8 is shaping up to emphasize ultra-high reliability in hostile RF environments, not just another massive jump in peak PHY rate.
This is a tacit admission that the race for raw speed is no longer the primary battle. The future of wireless lies in scheduling the air with the same seriousness we apply to wired switching: prioritization, admission control, traffic classification, roaming behavior that doesn’t spike tail latency, and continuous measurement of what the network is actually delivering.
Whether via private 5G or reliability-focused Wi-Fi evolution, the network must support SLA-like behavior for latency-sensitive machine traffic. For network designers, this flips the planning model: instead of asking “How fast can we make it?” we now ask “What is the worst-case delay this robot, vehicle, or agent can survive?” Determinism becomes the budget we engineer around.
In a world of autonomous agents, the distinction between “Wi-Fi” and “cellular” is often a distraction. The agent doesn’t care about the protocol; it cares about the outcome. We need a unified identity layer that can abstract away the radio physics.
A security robot moving from the parking lot (5G) into a warehouse (Wi-Fi) shouldn’t experience a policy gap. The policy must follow the identity, not the port.
In practice, this means policies can no longer live primarily in VLANs or subnets. They must live with the identity itself — tied to a device, workload, or agent — and remain consistent as it roams across spectrum, transport, topology, and physical location.
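A minimal sketch of policy following identity, under invented names and fields, might look like this; the roles, segments, and latency budgets are illustrative assumptions rather than any vendor's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str  # e.g. a certificate- or attestation-backed identity
    role: str      # workload or device class, not a port or subnet

# Policies keyed by role, independent of the radio or transport in use.
POLICIES = {
    "security-robot": {"allowed_segments": ["ot-control"], "max_latency_ms": 10},
    "hvac-sensor":    {"allowed_segments": ["building-telemetry"], "max_latency_ms": 500},
}

def resolve_policy(identity: AgentIdentity, access_network: str) -> dict:
    """Return the same policy whether the agent arrives over Wi-Fi, 5G, or wired."""
    policy = POLICIES.get(identity.role, {"allowed_segments": [], "max_latency_ms": None})
    return {**policy, "identity": identity.agent_id, "access_network": access_network}

if __name__ == "__main__":
    robot = AgentIdentity(agent_id="robot-042", role="security-robot")
    print(resolve_policy(robot, access_network="5g-outdoor"))
    print(resolve_policy(robot, access_network="wifi-warehouse"))  # same policy, different radio
```

The access network shows up only as context in the result; the rules themselves travel with the identity, which is the behavior the roaming security robot needs.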
When humans click on phishing links, we train them to be better. You cannot “train” an infected thermostat or a compromised sensor. As we flood our networks with headless devices, the attack surface expands exponentially.
Security can no longer be a perimeter overlay; it must be intrinsic to the fabric. In this model, the chain of trust starts at the edge. The access point stops being a passive pipe and becomes an enforcement point: identity-based segmentation, continuous verification, and rapid containment at the first hop.
Architecturally, the edge is no longer a passive on-ramp; it is the first line of defense that can shrink blast radius immediately and feed high-fidelity telemetry into centralized policy and response.
We spent the last 20 years building networks that were excellent at delivering content to people. The next 20 years will be about building networks that deliver context from the physical world to AI models.
This is not just an upgrade cycle. It is a fundamental reimagining of why we build networks in the first place. The edge is no longer just about connectivity. It is the sensory interface for the AI era.
If you’re a network or infrastructure leader looking at this shift, the key question isn’t “how fast can the wireless network go?” The question is: can we support real-time, deterministic applications? Can we make policy follow identity across domains? Can we contain threats where they originate, not after they spread?
The technology to build this exists today. The “things” are already here. The agents are waking up.
We are done designing for human patience. Now, we must build the nervous system for machine precision. The 'Best Effort' era is over. The Deterministic era has begun.

Rose-Hulman Institute of Technology shares how Azure Local, AVD, and GPU-powered infrastructure are transforming IT operations and enabling device-agnostic access to high-performance engineering software.

At CES 2026, Synopsys is staking out a bigger role in automotive: not just enabling chip design or running point simulations, but virtualizing the end-to-end engineering workflow that software-defined vehicles (SDVs) require.
The timing is not subtle. Automotive teams are being asked to ship platform-like experiences on hardware that can’t behave like a smartphone supply chain. Electrification is rewriting architectures, autonomy is raising the bar on validation, and customers now expect the in-vehicle experience to improve after purchase via connected services and over-the-air updates. Synopsys puts a fine point on the economic pressure: profitability is increasingly driven by software, and traditional design-to-cost metrics can’t keep up with the scale of change.
What makes this announcement worth paying attention to is that Synopsys is framing virtualization as a business survival lever, not a technical preference. In the company’s telling, virtualizing vehicle electronics for design, integration, testing, and validation can reduce costs by 20–60% and accelerate time-to-market.
And that message lands differently given Synopsys’ evolving position in the ecosystem. The company is now integrating Ansys (deal completion was reported in July 2025) at a time when physics-based simulation and system-level verification are moving from “later stage” to “make-or-break early stage” in automotive programs. It’s also fresh off an expanded partnership with NVIDIA that included a $2B NVIDIA investment in Synopsys common stock, explicitly tied to AI and accelerated computing for engineering and design workflows.
In other words: Synopsys is building the narrative that the SDV era will be won by whoever can industrialize engineering itself.
Synopsys’ CES announcement opens with a clear premise: the industry’s biggest challenge is accelerating innovation “in the age of AI” while reducing cost and complexity. Then it repeats a theme we’re hearing more broadly across the SDV stack: virtualization needs to move left, earlier than it traditionally has, because late-stage validation is too slow and too expensive when the vehicle is becoming a continuously updated software platform.
Synopsys is also explicit about where it wants to sit: across “systems to silicon,” from system-level simulation to semiconductor design, enabling automakers and suppliers to virtualize silicon and software development, predict system performance, and optimize reliability.
The company anchors that strategy in three highlight areas.
Synopsys says it will support the Fédération Internationale de l’Automobile (FIA) in enhancing single-seater safety standards, using design optimization and “predictively accurate digital human body models” to process thousands of parameters.
This matters beyond motorsport because it reflects a broader trend: safety requirements are expanding, and the industry needs high-fidelity methods that can scale. The old approach of iterating toward safety through physical testing alone is increasingly mismatched to compressed timelines and rising system complexity. Synopsys is positioning high-fidelity modeling and multiphysics simulation as the path to “more trials earlier,” not “more prototypes later.”
The second highlight is the integration of Samsung’s ISOCELL Auto 1H1 automotive image sensor into Ansys AVxcelerate Sensors, enabling high-fidelity simulation under “real-life conditions” early in the design cycle, without hardware.
This is a concrete example of what “shift left” looks like when autonomy and ADAS are part of the vehicle’s value proposition. If your perception stack is built on sensors whose behavior changes across conditions (lighting, weather, motion blur, glare, temperature), pushing realistic modeling earlier can reduce the number of expensive, late-cycle surprises. It also enables software teams to work against something closer to real sensor characteristics well before hardware integration is stable.
In the news announcement, Samsung frames this as letting OEMs “virtually experience real-world driving conditions” with predictive accuracy long before hardware integration.
Perhaps the most strategically important part of the announcement is Synopsys’ continued push around virtualization for electronics digital twins, anchored by Virtualizer Developer Kits (VDKs). Synopsys claims engineers can begin software development months before silicon is available, achieve system bring-up within days of silicon availability, and accelerate vehicle time-to-market by up to 12 months.
That claim is one of those “up to” statements that always deserves interrogation. But even if the median value is smaller, the direction is the point: in SDV programs, schedule risk often concentrates at integration, and integration risk often concentrates at the intersection of software, silicon, and systems. Anything that pulls integration and validation forward can change program math.
Synopsys also ties this directly to continuous updates: the Arm-focused VDK is positioned as supporting multi-ECU, multi-vendor integration and CI/CD pipelines “for continuous updates throughout the vehicle lifecycle.”
On the virtualization side, Synopsys calls out several partner-driven demonstrations and integrations:
Arm: Synopsys introduced a new VDK for Arm Zena Compute Subsystems, described as a standardized, safety-capable compute platform that can be used on-prem or in the cloud. Synopsys says this VDK provides a SOAFEE blueprint showcasing the OpenAD autonomous driving stack as a reference implementation.
IPG Automotive: Synopsys and IPG are demonstrating a multi-ECU prototype that integrates IPG CarMaker and Synopsys virtualization technologies via SIL Kit, with an explicit goal of establishing a continuous test strategy to improve software quality and reduce post-sale warranty costs.
SiMa.ai: Synopsys points to an integrated capability as part of a strategic collaboration, positioned as a blueprint for early virtual software development for AI-ready automotive SoCs used in ADAS and in-vehicle infotainment.
Then the company goes even more directly at the silicon platform layer:
NXP: Synopsys says it is expanding collaboration around VDKs supporting NXP’s new S32N7 family of high-performance computers for AI-powered vehicle cores.
Texas Instruments: Synopsys says TI is collaborating to provide a VDK for the TDA5 SoC family, enabling electronics digital twin capabilities that help engineers significantly accelerate time-to-market for SDVs, again citing the up-to-12-month acceleration figure for Virtualizer VDKs.
This partner list is telling because it mirrors the reality of SDVs: no OEM “controls” the full stack anymore. The hard problem is not deciding that SDVs are the future. The hard problem is getting suppliers, silicon vendors, tool providers, and software platforms to move with enough coherence that programs don’t stall at integration.
Synopsys is saying: we can be the connective tissue.
The bigger context: SDVs are forcing unlikely alliances
If you zoom out, the story here isn’t “Synopsys announced VDKs.” The story is that the SDV transition is pushing incumbents into uncomfortable collaboration — and also into fragmentation.
Robert Bielby captured this tension perfectly in a recent Voices of Innovation article, The Lamb Lays Down with the Lion to Avoid Being Eaten by the Wolf. He describes European OEM competitors collaborating on an open-source shared software platform for EVs as a response to competitive pressure from China’s EV momentum, while warning that these alliances are hard to sustain because “platform” boundaries blur fast: what’s commodity plumbing versus brand-defining differentiation?
Bielby also points out the overlapping landscape of efforts like SOAFEE and the Autonomous Vehicle Computing Consortium (AVCC), and how difficult it is for the industry to cleanly articulate how each differs.
This is where Synopsys’ CES positioning becomes more than marketing. The future of SDVs is not just about better software. It’s about a repeatable engineering operating model that can survive multi-vendor reality. Virtual prototypes, electronics digital twins, continuous test strategies, and CI/CD pipelines are not nice-to-have abstractions. They’re what make cross-company collaboration possible without collapsing under schedule pressure.
And Synopsys is building a thesis that it can deliver that layer — especially now, with Ansys in the fold and NVIDIA as a major partner and investor in accelerating engineering workflows.
While Synopsys is positioning itself as the connective tissue of the SDV era, its path is not without significant hurdles and aggressive competition. The very consolidation that makes Synopsys a powerhouse has also put it in the crosshairs of global regulators. To secure approval for the Ansys merger in 2025, Synopsys was forced by the EU and UK to divest key assets in optical solutions and power analysis software to preserve market choice. Analysts remain watchful of how seamless the integration of Ansys’ physics engines into Synopsys’ silicon tools will truly be, as cross-domain interoperability often suffers in the wake of massive corporate integrations.
Furthermore, Synopsys is facing a clash of the titans at CES 2026, with major forces challenging its narrative.
For OEMs, the choice isn’t just whether to virtualize, but whether to do so within the “walled garden” of an industry giant like Synopsys or through a more fragmented, open-standard approach.
The SDV conversation often gets stuck at the top of the stack: operating systems, middleware, autonomy frameworks, user experience. Those are real differentiators. But the industry’s most urgent constraint is the road to SDVs: the cost and time of validation, integration, and system bring-up as vehicle architectures become more centralized, more software-defined, and more AI-driven.
Synopsys asserts that virtualization is the enabling move that changes the economics. It’s not just about faster simulation. It’s about a software-first engineering model that makes earlier integration viable, reduces late-cycle risk, and supports continuous updates across the vehicle lifecycle.
The question is not whether virtualization becomes central. It already is. The question is which vendors can turn it into an industry-grade operating model that OEMs and suppliers can adopt at scale across fragmented platforms, shifting standards, and a competitive landscape that is forcing unlikely alliances.

Deploying the future: At CES 2026, the Arm ecosystem is delivering AI from the cloud to the front lines—powering mobility, robotics, and personal computing with fast, efficient, on-device intelligence.

Looking back on my 2025 predictions made me reflect on all that we’ve seen in the tech landscape this year: the massive silicon shakeup capped by last week’s Groq news, and the rise of agentic computing demonstrated by actual practitioner advancement (see our interview with Walmart).
Reflecting also reminded me of what we haven’t seen yet: an edge explosion (more on this later), or a major AI corporate scandal.
As we turn the page and look forward through 2026, my focus is on changes related to infrastructure and AI advancement, and on the human response to the changing relationship between machines and society. While I still think we are in the early innings of the AI era, we are getting to the point in this arc where long-term challenges are taking form, and we are starting to see how society is taking in this disruptive change, or rallying against it.
Without further ado, I offer my predictions for 2026:
1) We will see massive adoption of AI inference across the compute continuum from cloud to edge, and a new era of distributed autonomous computing will take hold.
Inference will be delivered based on economic efficiency, driving smaller model jobs to the edge at the point of data origin where efficiency of workload delivery will be the primary focus. For more complex inference, we will see highly tuned inference engines in the cloud deliver performance optimized results.
All of this will be delivered with bespoke silicon designed for the job at hand, allowing silicon heterogeneity to continue to thrive in infrastructure deployments. Enterprises and their value-chain partners will drive this rollout, with AI investment starting its slow climb toward economic return.
2) AI oversight of machines will become critical in an agentic era.
We all saw the headlines during the fall of 2025: the massive outages at AWS and Microsoft, and how those outages rocked business. The truth is, the foundations of cloud architectures were built for a different generation of computing, and new forms of compute, including agentic models with complex and lengthy workflows, require advances in stack development that go to the foundations of system state management.
True composable infrastructure – across compute, storage and network – will be required to provide agent control of workflow completion, and this means looking at the telemetry and management foundations of platforms to give better data to management suites. If you’re thinking… Allyson, we did this a decade ago… think again.
3) The conversations on data privacy and AI control will heat up, led by EU efforts to shape how AI models access data, how that data is protected in the process, and who owns any semblance of IP when IP forms the foundation of model wisdom.
While I do believe the genie is somewhat out of the bottle on this topic already… a public backlash on what is human creation will drive conversations and action well beyond Silicon Valley. This will be driven, I think, by an AI advancement that will produce a fear backlash to the technology that we haven’t seen yet.
4) Brain drain will enter center stage in scientific computing circles as government contracts favor vector-based computing investment advancing AI over more traditional forms of compute needed for many areas of scientific modeling and research.
Think of things like airflow predictions to land planes or advanced climate modeling – studies that require calculation precision. With government grants drying up in some parts of the world (like the US) for this computing, scientists are seeking new shores to advance their research, leaving us with existential questions about the value of science in society.
5) We will see a massive advancement in quantum.
Maybe this last prediction is what I want to see, but with the gathering momentum of quantum compute, I believe we are in for a disruptive moment in the creation of sustainable quantum workload delivery. With it comes the potential to advance human knowledge well beyond the boundaries of traditional computing.
Buckle up. This year promises an exciting landscape for compute and human advancement. This article wraps the TechArena predictions series, and if you didn't check out the predictions in total, revisit the series here. While not all of these predictions are likely to come true, you can trust TechArena’s voices of innovation to bring you center stage for those that do while also shining a light on those innovations guaranteed to take us by surprise.

It’s hard to believe that we are on the cusp of a new year where, here again, I am looking into my crystal ball to predict the three major trends that I believe will meaningfully affect the automotive industry. To be clear, these predictions aren’t thoughts that I have simply pulled out of thin air; they reflect my observations of events that have transpired and that I expect will see significant traction in the future.
If you’ve been tracking the automotive industry lately, you’ve probably noticed some turmoil. This isn’t a cyclical downturn; it’s a fundamental rewiring of how cars are conceived, built, and sold. While this doesn’t affect the overall trends that I shared in last year’s predictions, I believe that we are in the midst of witnessing three transformative trends that will separate tomorrow’s leaders from today’s laggards:
(1) AI-driven product development compressing design cycles by 60-70% while revolutionizing how we certify safety;
(2) a widening software-defined vehicle divide where clean-sheet manufacturers sprint ahead while incumbent original equipment manufacturers (OEMs) trip over their own legacy architectures; and
(3) incentive withdrawal triggering a temporary hybrid resurgence, yet failing to halt the fundamental electric vehicle (EV) cost-crossover momentum.
Designing a car used to require three to five years of rigorous, sequential work. Those timeframes are starting to become a thing of the past. Today, leading manufacturers are deploying generative design algorithms that generate thousands of engineering-validated component concepts in hours—a process that used to require months of human iteration. BMW’s AI systems crunch millions of parameters simultaneously, optimizing crash safety, weight reduction, and manufacturing feasibility all at once. It’s not just faster; it’s fundamentally different. These algorithms explore design possibilities that would never occur to human engineers—like organic, biomimetic chassis structures that cut material usage by 40% while improving crash performance.
Safety compliance—the traditional bottleneck that required endless physical prototypes and crash tests—is getting a complete makeover through AI-powered virtual validation. Machine learning models trained on decades of crash data and regulatory requirements now predict compliance outcomes with 95%+ accuracy before the first prototype is even built.
Additionally, it will be the norm for AI to be employed in functional safety certification, particularly ASIL compliance under ISO 26262. What once demanded months of tedious traceability mapping and documentation review is now orchestrated by agentic AI systems that provide 24/7 compliance monitoring, automatically generating technical requirements and linking them to architecture and test cases.
When Euro NCAP (the European New Car Assessment Programme) introduced new vulnerable road user protocols in 2023, AI-equipped manufacturers certified compliance months ahead of competitors still chained to physical testing cycles.
The real game-changer? AI creates a continuous improvement loop where vehicles evolve post-launch through over-the-air updates informed by real-world performance data. Auto OEMs like Tesla and ADAS chip suppliers like Mobileye are great examples of this approach, using their fleets as distributed sensor networks that feed billions of miles of driving data back into design algorithms. The competitive moat isn’t just engineering expertise anymore—it’s the sophistication of AI training data and computational infrastructure. Early adopters are already achieving 50% reductions in development costs while launching vehicles that are simultaneously safer, more efficient, and more responsive to emerging customer needs and regulatory demands.
The promise of software-defined vehicles (SDVs), where hardware stays stable while features continuously evolve through over-the-air (OTA) updates, has created an existential crisis for automakers who’ve spent decades perfecting the exact opposite model. While Tesla and Chinese manufacturers like BYD push new functions weekly via OTA updates, incumbent OEMs remain shackled to three-to-five-year hardware refresh cycles that mirror their old development processes.
This isn’t just a technology gap; it’s a fundamental architectural disadvantage rooted in decades of supplier-dependent, siloed development. Clean-sheet manufacturers design computing architectures as integrated systems from day one, selecting centralized processors with two to three times headroom for future growth. Incumbents, by contrast, attempt to orchestrate SDV platforms across a fragmented ecosystem where individual Tier-1 suppliers own proprietary software stacks—creating a “Frankenstein architecture” where integration becomes the primary engineering challenge rather than innovation.
Recent industry events have brutally validated this structural handicap. Volvo’s recent announcement that they must provide physical hardware upgrades for the EX90—because its processing architecture became overwhelmed by escalating advanced driver assistance system (ADAS) and connectivity demands—perfectly illustrates the incumbent predicament. Having designed a “software-defined” platform with insufficient compute headroom, Volvo now faces the nightmare scenario: costly retrofits and dealer service visits that contradict the very premise of SDV flexibility. This stems directly from legacy thinking that optimizes hardware for launch-day requirements rather than a decade of capability growth.
Ford’s cancellation of its “Lightning” SDV platform tells a similar story of ecosystem collapse: after three years and hundreds of millions invested, the company conceded it simply could not orchestrate the 40+ software suppliers needed to create a unified, updateable architecture. The complexity of synchronizing partners with competing commercial interests, disparate code bases, and incompatible security frameworks proved insurmountable—particularly when each supplier sought to protect its intellectual property rather than cede control to a centralized OEM platform.
The market is bifurcating into haves and have-nots at a heightened pace. Clean-sheet players achieve not just faster feature deployment but fundamentally different business models: they capture software-driven revenue streams, improve vehicle performance post-purchase, and build direct customer relationships through continuous value delivery.
Meanwhile, incumbent OEMs face a brutal choice: absorb massive write-downs to completely re-architect their platforms or surrender the software layer to tech giants like Qualcomm or NVIDIA, effectively becoming hardware integrators in their own products. The estimated three-to-four-year delay in SDV deployment creates a compounding disadvantage: while viable SDV players refine their self-driving algorithms across millions of vehicles, traditional OEMs must wait for next-generation architectures before they can even collect comparable data.
The global EV incentive landscape is undergoing a dramatic unwinding that directly threatens the business case for electric vehicle development in Western markets. Germany’s abrupt cancellation of its €4,500 EV subsidy in December 2023 triggered an immediate 16% plunge in EV sales and forced Volkswagen, Mercedes-Benz, and BMW to freeze or delay multiple EV programs mid-development. The UK, having ended its plug-in grant in 2022, saw EV market share stagnate at 16% while hybrid sales grew 27% year-over-year. In the US, while federal IRA credits remain technically available through 2032, political headwinds are tangible: Republican-led states are blocking charging infrastructure funding, tightening eligibility requirements, and creating regulatory uncertainty that freezes OEM capital allocation. Ford’s $12 billion EV investment pause and GM’s delayed Ultium platform rollout aren’t strategic pivots; they’re direct responses to the removal of subsidies that made those programs financially viable.
The immediate beneficiary will be the hybrid, which incumbent OEMs are rapidly repositioning as the “rational bridge technology.” Toyota’s aggressive hybrid push—projecting 40% of its US sales will be hybrids by 2026—exploits this policy window. With no charging infrastructure dependency, lower price premiums, and immediate fuel economy benefits, hybrids offer OEMs a politically safe, capital-efficient compliance path. OEMs such as Stellantis are following suit, retooling their electrification roadmaps to emphasize plug-in hybrid electric vehicles (PHEVs) in Europe and conventional hybrids in North America, essentially ceding the pure EV market to Tesla and Chinese imports for the next three to four years.
When $7,500 in tax credits evaporate, a $45,000 EV becomes a $52,500 psychological proposition, while a $35,000 hybrid remains exactly that. The engineering resources being diverted from pure EV programs to optimize next-generation hybrid powertrains represent a massive opportunity cost that extends the combustion engine’s lifespan and delays the very economies of scale EVs need to achieve true cost parity.
However, declaring an EV slowdown is to mistake tactical headwinds for strategic defeat. The momentum is simply shifting to geographies and segments where pure economics, not subsidies, drive adoption. China’s EV market grew 37% in 2024 despite negligible consumer incentives, powered instead by BYD and Geely delivering 300-mile range vehicles below $20,000. In the US, fleet electrification is accelerating regardless: Amazon’s Rivian rollout, FedEx’s EV delivery mandate, and Hertz’s continued EV expansion prove total cost of ownership advantages are real for high-utilization vehicles. Battery costs have dropped significantly since 2010 and continue declining at 8-10% annually, making the cost-crossover point inevitable.
The regulatory pressure hasn’t vanished. California’s ACC II mandate requiring 100% zero-emission vehicles by 2035 still stands, and the EU’s 2035 combustion ban remains in force. The “EV slowdown” narrative is a Western-centric illusion: globally, EV sales will hit 18 million units in 2025, up from 14 million in 2024. The real story isn’t retreat: it’s bifurcation, where incumbent OEMs, hobbled by capacity constraints and political risk, yield the mass market to nimbler competitors while fighting rearguard actions with hybrid technology.
These three trends don’t merely challenge the automotive industry. They actively dismantle it, creating an outcome where winners accelerate away from losers with compounding advantages. Successful companies will navigate this landscape by taking these three critical strategic shifts:
1. Transform Development into a Computational Advantage: AI-driven design isn’t a productivity tool; it’s the new basis of competition. OEMs must invest in extensive data gathering and AI infrastructure now or surrender engineering leadership to tech giants.
2. Architect for Software Velocity: The SDV transition demands immediate consolidation of software control. Incumbent OEMs must reduce supplier partners and accept near-term margin compression to own their architectures or permanently cede the customer relationship to ecosystem orchestrators.
3. Decouple EV Strategy from Western Policy Cycles: The incentive rollback is masking the ultimate long-term shift to permanent electrification. Winners are shifting R&D to China-aligned markets and fleet segments where the economics already favor EVs, treating Western consumer subsidies as nice-to-have rather than essential.
The companies that thrive won’t be those with the best internal combustion engines or the most efficient legacy factories. They’ll be the ones that recognize automotive manufacturing has become a data and software business that happens to produce vehicles.

By delivering performance with one-sixth the hardware footprint of competitors, the software-defined storage startup aims to make AI experimentation affordable at scale.
Organizations building out AI infrastructure have rapidly matured from struggling to understand GPU requirements to demanding scalable, cost-effective solutions that can grow with their ambitions. At the recent OCP Global Summit, I spoke with Roger Cummings, CEO of PEAK:AiO, and Solidigm’s Jeniece Wnorowski about how one company is tackling the infrastructure challenges that emerge as AI moves into production-scale deployments.
PEAK:AiO’s original breakthrough was software-defined AI storage that transforms commodity servers into high-performance infrastructure. “Our secret sauce very early was getting line-speed performance on a single server,” Roger said. That focus maximizes performance in the smallest possible footprint. This approach turns an ordinary server “into a rocket ship for AI” and helps organizations avoid massive deployments that consume excessive power, cooling, and rack space.
The efficiency gains are dramatic. Roger explained that competitors typically require 12 to 15 nodes to match performance that PEAK:AiO delivers with roughly one-sixth of that infrastructure. Enabled by its close partnership with Solidigm, this density advantage translates directly into lower operational costs for power, thermal management, and physical space—critical factors as data centers face growing energy constraints.
Single-server performance solved the first wave of challenges, but today’s expanding AI applications demand the ability to scale across distributed file systems. The market is littered with proprietary solutions, so PEAK:AiO took a different path: in collaboration with Los Alamos National Laboratory, it developed an open-source parallel network file system (pNFS) built specifically for AI workloads.
Going open source aligns with industry standards and customer demands for simplicity and flexibility. Roger emphasized that the new pNFS solution “will match the performance of storage as well as the scale of the file system that people need today.” The company uses a modular framework that automatically recognizes new nodes as they’re added, delivering linear scaling for both capacity and performance. This architecture dramatically lowers the cost of experimentation and failure—an essential consideration for teams exploring new AI use cases. As Roger put it, “It doesn’t have to be cost-prohibitive to take risks and build innovation.”
PEAK:AiO’s value proposition has evolved along with the AI market itself. While large-scale training clusters once dominated the conversation, inference workloads—both in centralized facilities and at the edge—are now the primary growth driver. The company’s high-performance, scalable platform is ideally suited for both.
Roger also highlighted rising interest in federated learning, where intelligence is captured as close as possible to the data source before being rolled up into master models. PEAK:AiO’s infrastructure naturally supports these distributed architectures by enabling fast data capture and processing wherever the data is generated.
Looking ahead, Roger said, “We need less infrastructure and more success—and I think we’re a great partner to achieve that.” Future innovations from PEAK:AiO, developed in partnership with Solidigm, will create richer memory and storage tiers with deeper intelligence about AI workload patterns. This will allow automated, policy-driven movement of workloads to the optimal tier, further improving both performance and cost efficiency.
PEAK:AiO’s trajectory shows how infrastructure providers are evolving to meet AI’s real-world scaling challenges. Its focus on extreme efficiency, modular open-source architectures, and workload-aware optimization directly addresses the constraints of power, space, and budget while delivering the performance AI demands. As deployments shift from centralized training to distributed inference and federated learning, solutions that combine density with operational simplicity will become increasingly indispensable.
Learn more about PEAK:AiO’s infrastructure solutions at https://peak-aio.com or connect with the team to explore how their open-source approach can accelerate your AI initiatives.

During #OCPSummit25, Jeniece Wnorowski of Solidigm and I caught up with Jelle Slenters of RackRenew on how the firm converts retired OCP-compliant racks, servers, switches, power shelves and more into validated rack-level systems, complete with provenance, burn-in, and a joint certification label.
The cloud era taught us to think in fleets. The AI era is forcing us to think in megawatts. In between those realities sits an enormous pool of high-quality, standards-based gear that ages out of hyperscale production far faster than it ages out of usefulness. RackRenew’s thesis is simple: if we standardize the processes for take-back, test, refurbish, and certify—at scale—we can turn retired systems into ready-to-run capacity for the next wave of adopters.
That’s not a niche. As Jelle put it, the total addressable opportunity is in the “hundreds of millions,” and if we truly nail the collaboration across the industry, the upside is “beyond calculation.” The value comes from process: documented, repeatable, and reliable.
OCP is the right place to be talking about this because standards reduce entropy. Common form factors, power and management specs, and known failure modes mean you can design remanufacturing flows that aren’t bespoke for every asset. When you remove variance, you remove cost and time. When you add shared protocols, you add trust.
Enter the OEMs and platform providers. The opportunity is to co-design take-back and recert flows for entire OCP building blocks—racks, servers, switches, power shelves, and harnessing—not just components. That means shared diagnostics, firmware baselines, power/thermal tests, and a joint certification label that signals: remanufactured, validated, and backed by a warranty. That’s what moves circular gear from “nice idea” to procurement-approved infrastructure.
Two near-term landing zones stood out in our conversation:
If we get the ecosystem right, customers get predictable outcomes. And predictable outcomes are the only way circularity shows up in the production SOW.
I love a good sustainability story, but the reason this matters goes beyond sustainability into economics. In an era of equipment scarcity and grid constraints, circular supply unlocks capacity faster and cheaper. That means shorter time to deploy, lower embodied carbon, and better capex efficiency. And because OCP standards reduce integration costs, the savings aren’t swamped by engineering overhead.
Jelle’s outline for how this scales:
Do that across storage, compute, networking, and power, and the “hundreds of millions” TAM looks conservative.
Reliability comes from grading and process: rack-level power and thermal validation, network link integrity checks, server health screening, and repeatable test plans. The result is predictable service-level objectives and clear workload matching—without over-indexing on any single subsystem.
Circularity wins when it delivers new-grade outcomes. The end user shouldn’t have to adjust workloads or expectations because a rack is remanufactured. OCP standards make that possible at scale. The next step is trust infrastructure—joint labels, shared test artifacts, and warranties. RackRenew’s rack-level, certified approach makes circular capacity a practical default for new deployments, unlocking savings, faster turn-ups, and lower embodied carbon.
Learn more about RackRenew at their website.
Watch the podcast

Enterprises spent the past two years experimenting with generative AI and building successful proofs of concept. These early efforts delivered real value, yet they also revealed deeper architectural challenges that many organizations had not fully anticipated. As AI adoption grew, teams faced rising storage costs, slow refresh cycles, limited lineage visibility, fragmented governance, and new operational risks from automated agents.
Leaders are beginning to understand that long-term AI success depends less on the choice of model and more on the strength of the data foundation beneath it. The next phase of AI maturity will be shaped by how well organizations modernize their data platforms to support higher volumes, real-time insights, and greater transparency.
In 2026, five data infrastructure shifts will have a significant impact on which enterprises scale AI effectively and which ones remain stuck in pilot mode.
Early AI conversations focused heavily on GPUs, inference performance, and model architecture. As adoption accelerates, a new pressure point is becoming clear. The data layer is reaching its limits first.
Continuous AI workloads generate repeated embedding cycles, large vector indexes, multiple versions of the same data, and expanding volumes of metadata. Storage spending is increasing faster than compute spending for many organizations, particularly in retrieval-heavy and personalization workloads. These patterns highlight that existing architectures were not designed for always-on AI pipelines.
Organizations that invested early in unified lakehouse designs, lifecycle automation, and efficient tiering will enter 2026 in a stronger position. Others will need to prioritize data layer modernization to support AI at scale.
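To make the tiering and lifecycle idea concrete, here is a minimal sketch of a policy that decides where an embedding version should live. The tier names, time windows, and `EmbeddingVersion` fields are illustrative assumptions, not a reference to any particular platform.

```python
from datetime import datetime, timedelta, timezone
from dataclasses import dataclass

# Illustrative tiering thresholds; real values depend on workload and cost model.
HOT_WINDOW = timedelta(days=7)
WARM_WINDOW = timedelta(days=60)

@dataclass
class EmbeddingVersion:
    dataset: str
    version: str
    last_accessed: datetime
    is_serving: bool   # referenced by a live vector index?

def assign_tier(v: EmbeddingVersion, now: datetime) -> str:
    """Decide where an embedding version should live based on use and age."""
    if v.is_serving or now - v.last_accessed < HOT_WINDOW:
        return "hot-nvme"
    if now - v.last_accessed < WARM_WINDOW:
        return "warm-object-storage"
    return "archive"   # candidate for deletion once retention rules allow

now = datetime.now(timezone.utc)
stale = EmbeddingVersion("support-kb", "2025-06-01", now - timedelta(days=120), is_serving=False)
print(assign_tier(stale, now))   # -> "archive"
```

Even a policy this simple, applied automatically, keeps repeated embedding cycles and old index versions from accumulating on the most expensive storage tier.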
Agentic systems are emerging as a practical way to automate tasks such as case updates, triage, content generation, and workflow coordination. These systems reduce manual work and improve response times, but they also introduce new risks related to data quality and operational integrity.
In 2026, enterprises will begin to introduce a dedicated guardrails layer that governs how agents interact with data. This will include checks before an agent writes to a system, detailed logs of all automated actions, controlled environments for testing new behaviors, rate controls to prevent runaway loops, and data contracts that clearly define what an agent is allowed to do.
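As a rough illustration of what such a guardrails layer could look like in code, the sketch below shows a hypothetical data contract that gates agent writes, applies a rate limit to prevent runaway loops, and produces an audit record for every automated action. The class, table names, action names, and limits are invented for the example; a production version would sit in front of real systems of record.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DataContract:
    """Hypothetical contract: what one agent may touch, and how often."""
    agent_id: str
    allowed_tables: set
    allowed_actions: set            # e.g. {"update_case", "append_note"}
    max_writes_per_minute: int = 30
    _write_log: list = field(default_factory=list)

    def check_write(self, table: str, action: str) -> None:
        """Raise if the proposed write falls outside the contract."""
        if table not in self.allowed_tables:
            raise PermissionError(f"{self.agent_id} may not write to {table}")
        if action not in self.allowed_actions:
            raise PermissionError(f"{self.agent_id} may not perform {action}")
        # Rate control: guard against runaway agent loops.
        now = time.time()
        recent = [t for t in self._write_log if now - t < 60]
        if len(recent) >= self.max_writes_per_minute:
            raise RuntimeError(f"{self.agent_id} exceeded its write rate limit")
        self._write_log = recent + [now]

    def record(self, table: str, action: str, payload: dict) -> dict:
        """Detailed log entry for every automated action."""
        return {"agent": self.agent_id, "table": table,
                "action": action, "payload": payload, "ts": time.time()}

contract = DataContract("triage-agent",
                        allowed_tables={"cases"},
                        allowed_actions={"update_case"})
contract.check_write("cases", "update_case")   # passes; out-of-contract writes raise
audit = contract.record("cases", "update_case", {"case_id": 42, "status": "open"})
```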
Organizations that implement this structure early will adopt agentic workflows responsibly and with confidence. Those that deploy agents without guardrails will face operational issues that slow progress.
AI systems achieve stronger results when they receive recent behavior, live events, and session-level context. As a result, the need for real-time data will continue to grow in 2026.
During 2025, many organizations observed that daily data refresh cycles were not sufficient for fraud detection, operational intelligence, or personalized digital experiences. In response, more teams are moving toward event-driven architecture and streaming pipelines that deliver fresh information directly into AI systems.
This shift will create wider adoption of continuous ingestion, closer connections between feature stores and streaming systems, and a reduced reliance on overnight batch jobs. Even partial modernization toward real-time data will lead to noticeable improvements in AI accuracy and responsiveness.
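A minimal sketch of that pattern follows, using a hard-coded event list in place of a real streaming platform such as Kafka or Kinesis. Each event updates per-user features the moment it arrives, so downstream AI systems read fresh context instead of yesterday's batch; the field names and feature logic are assumptions made for illustration.

```python
from collections import defaultdict, deque
from datetime import datetime, timezone

# Simulated transaction events; in production these arrive from a streaming platform.
events = [
    {"user_id": "u1", "amount": 40.0},
    {"user_id": "u1", "amount": 15.5},
    {"user_id": "u2", "amount": 220.0},
]

# Minimal in-memory "feature store": latest session-level features per user.
feature_store = defaultdict(lambda: {"txn_count": 0, "recent_amounts": deque(maxlen=50)})

def update_features(event: dict) -> dict:
    """Fold one event into per-user features as it arrives (no overnight batch)."""
    features = feature_store[event["user_id"]]
    features["txn_count"] += 1
    features["recent_amounts"].append(event["amount"])
    features["avg_amount"] = sum(features["recent_amounts"]) / len(features["recent_amounts"])
    features["updated_at"] = datetime.now(timezone.utc).isoformat()
    return features

for e in events:
    update_features(e)

print(feature_store["u1"]["avg_amount"])  # fresh context available to AI systems immediately
```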
Executives, regulators, and internal risk teams are asking deeper questions about how AI systems operate. These questions focus on lineage, model inputs, data quality, access control, and the ability to review how decisions were made.
In 2026, governance will move from a manual review process to an integrated part of the data platform. Organizations will introduce automated lineage capture, consistent dataset and model documentation, versioning of training data and embeddings, policy-aware ETL pipelines, and comprehensive logs of how AI and agents interact with sensitive data.
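As a simplified illustration of a policy-aware pipeline with automated lineage capture, the sketch below applies a column-level policy during a transform and emits a lineage record alongside the output. The policy contents, dataset names, and fields are assumptions made for the example, not a description of any specific governance product.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative policy: columns tagged as sensitive may not flow into AI training sets.
POLICY = {"blocked_columns": {"ssn", "email"}}

def transform_with_lineage(rows: list, source_name: str, transform_version: str):
    """Apply a policy-aware transform and emit a lineage record next to the data."""
    cleaned = [{k: v for k, v in row.items() if k not in POLICY["blocked_columns"]}
               for row in rows]
    dropped = sorted(POLICY["blocked_columns"] & set(rows[0])) if rows else []
    lineage = {
        "source": source_name,
        "transform_version": transform_version,
        "rows_in": len(rows),
        "rows_out": len(cleaned),
        "dropped_columns": dropped,
        "output_fingerprint": hashlib.sha256(
            json.dumps(cleaned, sort_keys=True).encode()).hexdigest(),
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    return cleaned, lineage

rows = [{"customer_id": 1, "email": "a@example.com", "lifetime_value": 930.0}]
data, lineage_record = transform_with_lineage(rows, "crm.customers", "v1.4.2")
```

The point is that lineage and policy enforcement happen inside the engineering workflow itself, rather than in a separate manual review step.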
Teams that embed governance directly into engineering workflows will scale AI programs more efficiently and with fewer audit challenges.
AI workloads drive significant energy consumption and storage growth. As reporting expectations evolve, organizations will begin measuring AI systems by efficiency as well as performance.
This will create new expectations around storage footprint per AI system, energy considerations for model training and inference, clear lifecycle policies for data and embeddings, and thoughtful workload placement in regions with cleaner energy profiles. Well-designed pipelines will reduce cost, support sustainability goals, and prepare enterprises for emerging reporting requirements.
Over time, efficiency will become a differentiator in how organizations deliver AI responsibly.
The year ahead will be an important moment in the evolution of enterprise AI. Organizations that succeed will be the ones that strengthen their data foundations and build platforms that support real-time intelligence, responsible automation, and transparent governance.
AI may begin with models, but it reaches its full potential only when the data ecosystem beneath it is ready. The enterprises that invest in these foundations today will be positioned to lead the next wave of intelligent systems in 2026 and beyond.

On November 18, 2025, the internet didn’t just blink; it froze.
A single bad configuration file deployed by Cloudflare effectively severed the nervous system of the modern web for four hours. While the headlines focused on websites going dark, the real panic was happening in the background: thousands of autonomous AI agents—the heralded “digital workforce” of 2025—suddenly went deaf and blind.
Without access to edge compute, the sophisticated AI infrastructures that companies had spent the year building simply vanished, leaving CIOs staring at blank dashboards, unable to diagnose whether their new intelligence layer was hallucinating or dead.
Less than 24 hours later, the checkbook came out to answer the silence.
On November 19, Palo Alto Networks announced it would acquire observability platform Chronosphere for a staggering $3.35 billion. The timing was too precise to be coincidental. In a world where a single config error can blind an entire enterprise, paying a premium for “x-ray vision” into your microservices isn’t just a strategy; it’s an insurance policy.
This 24-hour sequence—a catastrophic infrastructure failure followed immediately by a multi-billion dollar acquisition—encapsulates the tech industry’s defining story of 2025: the realization that AI is only as powerful as the fragile pipes it runs on, and the frantic land grab to own the tools that keep those pipes from bursting.
The market activity in 2025 was defined by a direct correlation between specific “Fear Events” (outages and failures) and “Safety Buys” (consolidation).
[Table: 2025 “fear events” (outages and failures) and the “safety buy” acquisitions that followed]
The deals listed above aren’t random; they map directly to three specific anxieties that plagued CIOs throughout the year.
If 2024 was the year of AI hype, 2025 is the year of AI governance as a platform.
The acquisitions detailed above are not merely asset grabs; they are architectural decisions. By owning the critical choke points of the AI stack—Identity, Data, and Observability—the tech giants are constructing a “Moat of Trust” that creates a nearly insurmountable barrier for point-solution startups.
The integration of CyberArk into Palo Alto Networks solves the single biggest headache for CISOs: Who is actually doing this?
In a traditional setup, a startup might offer a tool to monitor AI agents, but they can only alert a human. By feeding CyberArk’s privileged identity data directly into Palo Alto’s firewalls, the network itself enforces the policy. If an agent’s session token behaves outside its normal parameters, the connection is severed instantly. A standalone startup is merely a smoke alarm; Palo Alto is now the sprinkler system.
CrowdStrike’s acquisition of Onum fundamentally changes the economics of AI security. Until now, companies paid massive bills to ingest all their data into a SIEM and then paid another vendor to secure it. CrowdStrike has moved the security checkpoint upstream. Onum cleans, anonymizes, and enriches data before it ever reaches the AI model. Niche data privacy startups usually operate by scanning data at rest; CrowdStrike is now doing it in transit, rendering the startup redundant.
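The sketch below is not CrowdStrike's or Onum's implementation; it is a generic illustration of the upstream idea: anonymize or redact sensitive fields while an event is still in transit, before it reaches a SIEM or a model. The field names and patterns are assumptions.

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_event(event: dict, sensitive_keys=("user_email", "ssn")) -> dict:
    """Anonymize an event before it reaches downstream analytics or an AI model."""
    cleaned = {}
    for key, value in event.items():
        if key in sensitive_keys:
            # Stable pseudonym: correlation still works without exposing the raw value.
            cleaned[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        elif isinstance(value, str):
            cleaned[key] = EMAIL_RE.sub("<redacted-email>", value)
        else:
            cleaned[key] = value
    return cleaned

raw = {"user_email": "jane@example.com",
       "message": "reset link sent to jane@example.com",
       "latency_ms": 87}
print(redact_event(raw))
```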
Google’s purchase of Wiz is the final piece of a self-healing cloud. Wiz was already the best at finding risks (visibility), while Google is the best at fixing them (automation). Now, when Wiz detects a misconfiguration, the system passes the alert to Gemini, which generates the code fix, tests it, and deploys it. The zone of death for startups is now any product that only identifies problems without fixing them.
The catastrophic outages of 2025 proved that the fragility of the internet is the ultimate cap on AI’s potential, turning minor configuration errors into global paralyses. By rapidly consolidating identity, data, and observability into a unified moat of trust, the tech giants have successfully sold a solution to this fear, but at the cost of erasing the competitive middle market.

Artificial intelligence has moved from experimentation to enterprise backbone. As organizations adopt AI for detection, automation, analytics, and decision support, adversaries are rapidly doing the same. The result is a new competitive landscape, where threat actors leverage models that adapt, reason, and evolve faster than traditional controls can respond.
In 2026, cybersecurity will be shaped by a convergence of machine-driven offense, machine-assisted defense, and a new class of risks that live inside the AI systems we deploy. Enterprises will face challenges not just protecting infrastructure, but protecting the very logic, memory, and autonomy of intelligent systems.
We’re entering a security environment where AI isn’t just embedded in technology, but becomes the attacker, the defender, the insider threat, and the policy engine all at once.
Over the past two years, generative AI has made it dramatically easier to produce executable code, including malicious software. What once required specialized skills and was mostly confined to research labs and experimental demonstrations is now circulating in underground marketplaces, packaged into tools, and shared among threat actors with little technical depth.
In 2026, this trend accelerates for a few key reasons:
• Open-source AI models that generate code are improving quickly, giving attackers the ability to produce malware that can rewrite sections of itself when needed.
• Technical expertise matters less, because mutation logic and exploit fragments can now be produced automatically rather than handcrafted by a seasoned developer.
• Many security tools still rely on recognizing familiar patterns, which AI-generated variants are purposely designed to avoid, making them harder to spot.
These shifts create a turning point. We are entering an era where malware can adjust how it looks or behaves each time it runs, making investigations slower and detection methods less reliable. Reverse engineering becomes more complex, response teams lose valuable time, and traditional defenses struggle to keep up.
In other words, 2026 marks the moment when self-adapting malware moves from theory to practice.
AI has already shown it can outperform humans in capture-the-flag competitions and automated exploit challenges. What used to be experimental is now practical. At the same time, several trends are pushing AI into a more active role in security work:
• Cloud environments are large and complex, and humans cannot evaluate risks fast enough on their own.
• Red teams and nation-state groups are already trying AI-assisted reconnaissance and vulnerability chaining, showing that machine-driven offense is moving from testing to early use.
• Security tools are shifting from copilots to more autonomous systems, able to plan and carry out tasks without constant direction.
In 2026, these developments start to come together. AI begins to take a leading role in finding weaknesses, deciding what to do next, and even executing parts of the attack or defense process. Both attackers and defenders benefit, with machines helping attackers scale their efforts and defenders getting support without needing more staff.
Security teams will need to focus more on supervising how AI systems make decisions. Organizations will adopt governance tools that can check how AI reached its conclusions, apply boundaries, and stop high-risk actions before they happen. Instead of just detecting threats, security programs will also evaluate whether automated actions are safe, appropriate, and aligned with policy.
Most enterprises underestimate how much autonomy they are granting their AI systems. SOC copilots, LLM-powered automation, AI knowledge bases, and AI-assisted decision engines increasingly rely on:
• Log histories
• Ticketing systems
• Knowledge articles
• Embedded memory
• Operational runbooks
These sources are rarely authenticated or monitored for tampering. At the same time, attackers have learned that influencing AI indirectly by corrupting the information it consumes can have greater impact than compromising infrastructure.
In 2026, this becomes a critical concern because:
• AI memory is becoming persistent.
• AI influence over operational processes is increasing.
• There are no mainstream integrity controls for AI context.
This creates a high-value blind spot that attackers will exploit.
The idea of an “insider threat” now includes AI itself. Organizations will need ways to verify the data their AI learns from, ensure critical documents can’t be tampered with, and constantly check that their AI systems are working with trusted information.
To navigate 2026 successfully, organizations should:
• Build detection based on behavior-driven anomaly modeling (see the sketch after this list).
• Invest in adversarial AI testing capabilities.
• Create policies and validation logic to oversee AI-driven actions.
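As a minimal sketch of the behavior-driven approach in the first recommendation, the example below flags activity that deviates sharply from an entity's own baseline rather than matching known signatures. The threshold, the metric (outbound connections per hour), and the sample values are illustrative assumptions.

```python
import statistics

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag behavior that deviates sharply from an entity's own baseline."""
    if len(history) < 10:                  # not enough baseline yet
        return False
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9
    z_score = (current - mean) / stdev
    return z_score > threshold

# Baseline: outbound connections per hour for one service account.
baseline = [12, 9, 11, 14, 10, 13, 12, 11, 10, 12, 13, 11]
print(is_anomalous(baseline, 15))    # False: within normal variation
print(is_anomalous(baseline, 240))   # True: likely worth investigating
```

Baseline-relative detection like this is harder for self-rewriting malware to evade than pattern matching, because the behavior, not the binary, is what gets scored.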
Cybersecurity strategy will increasingly resemble risk engineering for machine decision-making, rather than simple infrastructure defense.
2026 marks a transition point. Threats generate themselves, attackers automate decision-making, and the information an AI system trusts becomes an attack surface of its own.
These predictions are not speculative. They emerged from observable patterns in tooling maturity, attacker economics, and enterprise AI dependence.
Organizations that invest early will not only adapt, but will stand apart through stronger resilience, faster response, and trusted automation. If you are shaping how AI fits into your security strategy, this is the moment to begin. The next phase of cybersecurity will be defined by leaders who collaborate and act early.

It’s predictions season again, which means tech LinkedIn is about to be flooded with hot takes like “this is the year that ‘Mining on the South side of the Moon’ will get real.” Spoiler alert: humanoid robots won’t be folding your laundry by February, but I am watching this space reach an actual inflection point. Here’s what I think 2026 holds for humanoid robotics.
The humanoid hardware race has been exhilarating to watch, with many demos and initial customer deployments. But 2026 won’t be about more impressive backflips or faster walking speeds. We’ll hit the “hardware plateau,” where physical capabilities are good enough for real-world deployment and the bottleneck shifts entirely to intelligence and adaptability.
The winners will be companies whose robots can understand context, learn from observation, and respond appropriately to messy, unpredictable human environments. Foundation models trained on internet-scale data are already getting grounded in physical robotics. Expect humanoids that can genuinely learn new tasks by watching humans perform them once or twice, rather than requiring thousands of hours of simulation.
2026 will be the year humanoid robotics makes its first serious push into personal and healthcare settings. Not because the technology is perfect, but because the need is urgent and growing.
Our aging population crisis is not waiting for perfect robots. We are facing a caregiver shortage that no amount of policy can solve. Humanoid robots that can assist with basic tasks, provide companionship, and alert human caregivers to problems will move from pilot programs to early commercial deployment.
This is where my work at Machani Robotics comes in. We are developing AI companions that can sense, understand, and respond to human emotion with authenticity. The technical challenge is creating robots that know context: when someone needs encouragement versus space, when to start a conversation versus sitting quietly nearby.
Here’s my boldest prediction: emotional intelligence will become the deciding factor in humanoid robotics, and companies that ignore this will fail regardless of their hardware prowess.
As humanoids enter our homes, the question is not “can this robot lift 50 pounds?” It’s “will my grandmother trust this robot?” That trust comes from emotional attunement, not payload capacity. We need less Ultron, more Vision.
The breakthrough will come from combining multimodal AI (systems that simultaneously process facial expressions, voice tone, body language, and context) with robotics. In 2026, expect humanoids that can read a room, adjust behavior based on mood, and provide genuine companionship rather than just task completion.
We are deploying humanoid robots without agreed-upon standards for how they should behave, communicate, or share data. Every company builds proprietary systems. This fragmentation is dangerous when healthcare facilities need to integrate humanoids from multiple manufacturers. The Avengers had a hard enough time working together, and they at least spoke the same language.
I predict 2026 will see industry consortiums forming to establish baseline standards covering safety protocols, emergency shutdown procedures, data privacy, and interoperability. My hope is that we prioritize human-centered standards from the start. Not just technical specs for joint torque, but standards for how robots signal intentions, respect personal space, and handle emotional data.
2026 will bring our first serious regulatory frameworks for humanoid robots in homes and public spaces, with the European Union likely leading through extensions of the EU AI Act. This will separate serious players from demo companies and drive industry consolidation.
Investment will flow toward companies with clear paths to revenue and specific use cases in senior care, hospitality, and security. The talent war will intensify, particularly for systems engineers who understand both physical and cognitive aspects of humanoid systems.
The companies that succeed will actively involve ethicists, mental health professionals, and diverse communities in their design process from the beginning. Standards and regulations alone will not build trust. Transparent development and genuine commitment to safety will.
I am more excited than ever about what humanoid robotics can do for human flourishing. Not because robots will replace human connection, but because they can amplify our capacity to care for each other and help people truly thrive.
The measure of success in 2026 will not be how human-like our robots look or how smoothly they walk. It will be whether they unlock fuller, richer lives. Can they help an elderly person not just maintain independence, but pursue new hobbies and stay engaged with their community? Can they give caregivers not just relief, but the space to be present and connected rather than exhausted? Can they provide companionship that helps someone flourish, not just cope with loneliness?
We stand at a rare moment where technology, need, and capability are converging. This is not about robots helping us survive with dignity. It’s about technology that helps us live with joy, purpose, and deeper human connection. The question is not whether humanoid robotics will transform how we care for each other. It’s whether we will build a future where technology elevates what it means to be human.
2026 will show us how close we are to making it real. As computer scientist Alan Kay said, “The best way to predict the future is to invent it.”

I recently caught up with Giga Computing’s Chen Lee about what’s really changing inside AI data centers. Spoiler: it’s not just bigger racks and faster GPUs. It’s how those racks get built, where they’re assembled, and how we plan for a world where inference becomes the dominant workload pattern.
One of the threads in our discussion was Giga Computing’s push on circularity—reclaiming, sorting, and returning components to a second life. Chen was candid: yes, circular practices eliminate waste and can reduce some costs, but they’re not a pure economic play. There’s real human work in sorting and qualification. The point isn’t cost-first; it’s responsibility-first—answering “the call for Earth,” as he put it. That framing matters. At OCP, sustainability isn’t a backdrop; it’s a design constraint. And circularity is moving from “nice to have” to “show me your plan.”
Training may steal the headlines—and budgets—but inference is the business. Chen’s view is that the next expansion wave will be dominated by inference-centric racks that look and behave differently: more elastically scaled, more network-sensitive, and more tightly integrated with edge and enterprise fabrics. That opens new addressable markets across sectors—healthcare, finance, oil and gas, education, government—each with unique latency, privacy, and cost envelopes. If training is a few giant mountains, inference is a mountain range: broader, more varied, and much closer to the users who depend on it.
Giga Computing’s emphasis is modularity: building blocks that let operators configure for today’s sprint and tomorrow’s pivot. In practice, modularity shortens time-to-capacity, smooths upgrades, and lowers the operational blast radius when something changes—like a new model family, memory footprint, or accelerator ratio. The companies winning rack scale are the ones who treat integration, test, and validation as first-class products—not just a step between BOM and shipment.
A second pillar: U.S.-based assembly. Giga Computing is standing up local capability to improve lead times, reduce shipping risk, and cut carbon that comes with long logistics chains. There’s also a quality and reliability angle that’s easy to overlook: racks that don’t spend weeks getting rattled across oceans arrive with fewer transport-induced gremlins, making burn-in and final test more predictive. Chen flagged SKD (semi-knocked-down) approaches that enable “Assembled in America” servers—another lever for responsiveness when demand spikes and when regulatory or customer requirements insist on a regional footprint. In a supply chain still healing from 2020–2022 shocks, proximity is performance.
Chen was direct about the macro arc: AI’s disruption dwarfs the industrial revolution. That’s not hyperbole on the OCP show floor; it’s the operating assumption for everyone building AI factories. But the second clause matters: be careful about the road we go down. For vendors, that means shipping capacity with guardrails—power-aware, grid-friendly, and supportable. For operators, it means designing for efficiency, circularity, and people—because the most constrained resource in this market might be expert hands, not megawatts.
It’s clear that Giga Computing is leaning into OCP’s core ethos—openness, efficiency, and scalability—while aligning to the buyer reality of 2025: more inference, faster turns, and a stricter carbon ledger. The choice to invest in U.S. assembly is a clear signal. The focus on modular rack-scale integration is another. And the honesty about circularity’s costs—and its necessity—reads as maturity, not marketing.
But the larger takeaway is ecosystem-level: OCP’s center of gravity is shifting from “can we build it?” to “can we scale it responsibly, locally, and profitably?” As AI diffuses into every sector, vendors that align on all three tend to be better positioned to keep shipping through the next wave.
If you want to learn more, Chen points to Giga Computing’s site and his LinkedIn.
The ground is shifting under data center infrastructure—away from one-off training builds toward the day-to-day realities of inference at scale, tighter lead times, and verifiable sustainability. In that context, Giga Computing is getting more sophisticated in addressing enterprise requirements—focusing on modular rack integration, treating circularity as an engineering process, and leaning into regional assembly for predictability and QA.

In Part 1 of this series, we looked at the physical side of the transformation – AI factories, 800 VDC, liquid cooling, and circular infrastructure.
But hardware is only half the story.
As we move into 2026, data centers are not just engineering projects. They are instruments of national policy, flashpoints for public debate, and symbols of digital sovereignty. The young data center industry is maturing fast and starting to behave more like a utility: tied to power and water infrastructure, wrapped in regulation, and caught up in geopolitics.
This second part looks at how power, policy, and public trust will push data centers further into “utility” territory in 2026.
In 2026, more governments will treat data centers as critical national infrastructure, on par with power, water, and telecom.
Mega-scale AI campuses are already collaborating directly with utilities on long-term power contracts and grid planning. National digital strategies increasingly call out data centers as strategic assets.
The next step is structural. Expect more state-owned or majority-stake data center entities in markets where digital sovereignty is a priority. In some countries, the same organizations that run power or fiber networks will also operate core data center capacity. Power and data will converge not just technically, but institutionally. On the flip side, commercial players could move into utility territory, as when Microsoft contracted to bring a nuclear power plant back online for its data centers. The possibilities raise the question: which model will prevail?
At the same time, the anti-data center movement will keep growing.
Energy and water consumption are obvious flashpoints. In EMEA, regulations are already forcing operators to justify every megawatt and liter. In the US, the rules are looser, but community pushback is rising, especially where people feel surrounded by new builds.
In 2026, “build it and they will come” will not cut it. Large projects that do not engage communities – and clearly demonstrate local benefits – will face protests, legal hurdles, or outright rejection.
Operators will need a social license to operate: transparent reporting on resource use, clear community benefit programs, and honest conversations about trade-offs. Data centers are here to stay; invisible, unaccountable ones are not.
Governments are starting to use the strongest lever they have: tying growth to resource efficiency.
Heat recycling is moving from ESG slideware to hard requirement. EU rules are pushing toward mandatory heat reuse for new facilities. Selling excess heat into district networks can offset a real share of energy costs – if the infrastructure or use-case exists.
Most older sites were not built to capture and reuse heat; retrofitting is expensive. New builds, however, will be expected to integrate with local heating systems or other industrial processes that can absorb waste heat. Water will follow a similar pattern: regions under stress will demand closed-loop cooling, reuse, or alternative technologies.
By the end of 2026, heat and water plans will be as central to a data center proposal as power budgets and fiber routes.
Sovereignty is becoming one of the main forces shaping where data center and AI capacity is built.
Nations want control over their data, their models, and their digital infrastructure. High-profile outages at large cloud providers reminded everyone that concentrating critical workloads with a few global players creates a huge single point of failure.
This does not mean cloud is dead. But the growth curve will bend. In 2026, expect slightly slower cloud adoption and more deliberate hybrid strategies, especially in regulated and sovereign contexts.
Public-private regional clouds, national AI platforms, and targeted repatriation of sensitive workloads will create a more distributed, sovereignty-aware data center landscape. Compute becomes a strategic resource, not just an IT line item.
As data centers become tied to AI and quantum capabilities, export controls will tighten.
Advanced GPUs, high-bandwidth memory, accelerators, and some quantum and networking components are already restricted in certain corridors. In 2026, controls will broaden to cover more hardware, software, and services.
For operators, this reshapes the global map. Some countries will struggle to access cutting-edge hardware. Others will double down on domestic design and manufacturing to reduce dependencies. Cross-border partnerships will need careful navigation to stay on the right side of the rules.
Tech nationalism is not new, but AI raises the stakes. Data centers now sit inside a broader geopolitical conversation about who controls the next wave of economic and security advantage.
As AI enters Gartner’s “trough of disillusionment,” hype will wobble, but real use cases will continue to grow.
Alongside that, AI will create new ethical headaches: deepfakes, synthetic media, biased models, opaque decisions about credit, hiring, healthcare, and access to services.
In 2026, digital ethics starts to get operationalized:
Culture will feel this too. It is not hard to imagine the first Hollywood-grade, AI-generated box-office success sparking fresh debates about authorship and creativity.
Most people still treat “the cloud” as something abstract. They do not picture the buildings, power lines, and water systems behind their apps. That will begin to change in 2026.
As data centers appear more often in headlines – around energy, outages, sovereignty, or ethics – operators and governments will need to get better at explaining them. Expect more transparency dashboards, open days, and school or community programs.
As the industry matures, “data center” becomes too generic. Not all sites serve the same role, and in 2026 that will show up more clearly in how they are described and regulated.
Categories will solidify:
Operators will increasingly lean into this language. Regulators and utilities will follow, tailoring expectations to the facility type. An AI factory next to a small town is not the same as a modest enterprise colo – and it should not be treated as such.
On the inside, data centers are being rebuilt around AI, power efficiency, and circular infrastructure. On the outside, they are being pulled into national strategy, community impact, and digital ethics.
The question for 2026 is not whether data centers become more utility-like. That is already happening. The real question is how well the industry can balance three things: relentless demand for compute, finite natural resources, and a society that is only just waking up to how dependent it has become on all this hidden infrastructure.

Data centers have always been power hungry, but the AI revolution has transformed them into energy consumers on an unprecedented scale. My recent conversation at the Open Compute Project Global Summit with Dr. Andrew A. Chien, professor of computer science at the University of Chicago and senior scientist at Argonne National Laboratory, alongside Solidigm’s Jeniece Wnorowski, revealed how AI’s insatiable appetite for energy is forcing a fundamental rethink of data center infrastructure.
Andrew brings a unique perspective to these challenges. He has worked both in academia and the information technology industry, including as a senior executive leading research at Intel. Through his journey, he’s learned which problems academics are uniquely positioned to solve versus those better suited for industry. His current focus spans two areas: accelerators for scalable graph analytics, and how data centers interact with the power grid. The latter brought him to OCP Summit.
The trends driving change are impossible to ignore. Andrew said he began thinking about the trend toward higher power density more than a decade ago. “I’d sort of figured Moore’s Law was coming to an end,” he said. “And that means that computing, which we seem to have an infinite appetite for, was going to consume more and more power.” Anticipating this, he launched the Zero-carbon Cloud project to explore how data centers might harmonize with an increasingly renewable-based power grid.
The challenge extends beyond raw megawatts. As power grids transition to renewable sources, they’re becoming inherently volatile. Solar and wind generation fluctuate based on weather and time of day, creating periods of abundance and scarcity. Data centers, designed to run flat out at full load to maximize value, must now find ways to coexist with this fluctuating supply without sacrificing performance or reliability.
Andrew’s solution centers on micro grids that provide flexibility in how data centers consume power and manage thermal loads. The concept addresses the fact that power grids are built to meet peak demand, which means they face stress during just one percent of the year. During those peak moments, if data centers could back off slightly from their demand on the main power grid for a few hours, they could dramatically ease its strain.
With a micro grid in place, such an offload could be possible without disrupting operations in the data center. AI training, inference, and other computing workloads could continue to run nearly uninterrupted, while the grid gains the breathing room it needs during stress periods. The micro grid would act as a buffer, filling gaps between what the grid can provide and what the data center requires.
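A toy control-loop sketch of that buffering idea follows, with hypothetical numbers: when the utility asks the site to reduce its grid draw, the planner covers the gap from the micro grid first and defers flexible work only if a shortfall remains. None of the figures or priorities below come from Andrew's research; they simply illustrate the mechanism.

```python
def plan_power_split(facility_demand_mw: float,
                     grid_cap_mw: float,
                     microgrid_capacity_mw: float,
                     deferrable_load_mw: float) -> dict:
    """Cover demand during a grid stress event.

    Priority: draw up to the grid cap, fill the gap from the on-site micro grid,
    then defer flexible work (e.g., checkpointed training jobs) if needed.
    """
    from_grid = min(facility_demand_mw, grid_cap_mw)
    shortfall = facility_demand_mw - from_grid
    from_microgrid = min(shortfall, microgrid_capacity_mw)
    remaining = shortfall - from_microgrid
    deferred = min(remaining, deferrable_load_mw)
    unserved = remaining - deferred          # ~0 if the buffer is sized well
    return {"grid": from_grid, "microgrid": from_microgrid,
            "deferred": deferred, "unserved": unserved}

# Hypothetical peak event: a 100 MW campus asked to drop grid draw to 85 MW for a few hours.
print(plan_power_split(facility_demand_mw=100, grid_cap_mw=85,
                       microgrid_capacity_mw=12, deferrable_load_mw=5))
# -> {'grid': 85, 'microgrid': 12, 'deferred': 3, 'unserved': 0}
```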
Andrew’s recent research demonstrates that these power micro grids can be deployed for a small fraction of total data center cost. Lightweight generators and small-scale storage technologies make the approach economically viable even for massive facilities. The technology exists and is affordable relative to overall data center investments.
If the technology is ready and cost-effective, what’s holding back adoption? Andrew identified two primary barriers, neither purely technical. First comes the question of who pays. Second, and perhaps more complex, is establishing clear responsibility between data center operators and power utilities.
Current debates between data center companies and power grids focus on connection costs and shared responsibilities. These discussions are breaking new ground, creating precedents for an entirely new relationship between computing infrastructure and energy systems. Andrew emphasized a lesson from his industry experience. “It’s not that industry won’t pay. It’s they want everyone to pay fairly. They don’t want to be disadvantaged,” he noted. With clear standards in place, he says, “We can have our cake and eat it too.”
The cultural challenge may prove equally significant. Data center operations have historically prioritized reliability above all else, with infrastructure optimized for stable, predictable conditions. Moving to dynamically managed systems that respond to fluctuating power availability requires embracing flexibility in an industry built on consistency. For organizations where reliability culture defines their identity, this shift can feel uncomfortable.
While AI dominates current data center conversations, Andrew sees the massive infrastructure being built today as enabling far more than machine learning. The scale of computing infrastructure now available will support diverse applications and create opportunities across many domains. “There are other kinds of computing that are going to be enriching our lives, creating commercial opportunity, leading to exciting research for many more years to come,” he says.
Andrew A. Chien’s work illuminates the infrastructure challenges hiding beneath AI’s exponential growth. His vision of micro grid-enabled data centers is a fascinating blueprint for sustainable computing at scale. As renewable energy transforms power grids and AI-enhanced workloads push data centers to new extremes, the solutions emerging from collaborations between academia, industry, and organizations like OCP will determine whether we can support computing’s future. The path forward requires not just technology but clear standards, shared responsibility, and willingness to embrace dynamic management in an industry built on stability.
Learn more about Andrew’s research at the University of Chicago Computer Science website.

Marvell Technology has entered into a definitive agreement to acquire Santa Clara-based Celestial AI. The transaction is valued at $3.25 billion upfront, with approximately $1 billion in cash and $2.25 billion in stock. There is an additional $2.25 billion in potential earnouts based on revenue milestones.
The deal is expected to close in early 2026.
While Celestial AI is not yet a household name, the acquisition would give Marvell control over one of the most critical technologies in hyperscale architecture: the "Photonic Fabric." Celestial specializes in optical interconnects designed to separate memory from compute.
In current AI infrastructure, chips like NVIDIA’s Blackwell are often bottlenecked by how fast they can fetch data from local memory. Celestial AI uses light (photonics) rather than electrical signals over copper to fetch data from remote memory pools at speeds and latencies that mimic being on-chip. If successful, this acquisition allows Marvell to sell the essential optical plumbing required for the next generation of massive AI clusters.
This acquisition arrives at a critical inflection point for AI infrastructure in late 2025:
If the Palo Alto/CyberArk deals of 2025 were about building a "Moat of Trust" (software), this deal is Marvell attempting to dig a "Moat of Light" (hardware).
We view this acquisition as a defensive masterstroke and a high-risk integration challenge:
This isn't just a chip deal; it’s an infrastructure bet. Marvell is betting that the future of AI isn't just bigger chips, but better wiring. If they are right, they just bought the nervous system of the 2026 data center.

Predicting the future of data centers is always a gamble, but one thing is clear: 2026 will be a year of reckoning. The industry is no longer just powering the digital world – it is becoming the backbone of modern society.
AI is at the center of this shift. AI factories became the new benchmark for hyperscalers in 2025. In 2026, their influence extends much further: power, cooling, space, and supply chains are all being reshaped around AI’s appetite for compute.
At the same time, the data center industry is rapidly maturing and starting to look and behave like a utility. Energy availability, grid stability, and long-term resource planning are now board-level topics.
This first part of my two-part 2026 predictions series looks at the physical side of that transformation: how AI will rewrite data center infrastructure in 2026.
In 2026, we will see the first wave of truly gigawatt-scale AI campuses moving from announcement to reality.
Hyperscalers are pouring billions into custom silicon, liquid-cooled mega-clusters, and, in some cases, dedicated power infrastructure. They are effectively building digital power plants: facilities where the fuel is energy and data, and the output is AI models and services—and residual heat.
These projects put tremendous strain on local grids. Large AI training jobs with spiking compute demands already have a visible impact on grid stability. To keep building, operators and utilities will need to plan together: long-term contracts, shared investments in new generation, and smarter demand management. Not every data center will be an AI factory, but the ones that are will set the pattern for utility-scale digital infrastructure.
Traditional power distribution and air cooling are hitting their limits.
Architectures like NVIDIA’s Kyber racks – with vertical compute blades, 800-volt direct current (VDC) distribution, and liquid cooling – point to where high-density AI infrastructure is heading. Higher voltage means lower losses, less copper, and more efficient use of space and power.
In 2026, 800 VDC and direct-to-chip or cold-plate liquid cooling will start to move from “bleeding edge” to “expected baseline” for dense AI racks. Operators that design new facilities around legacy assumptions risk locking themselves out of future deployments.
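To make the voltage argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 100 kW rack load and 0.5 milliohm path resistance are assumed illustrative values, not figures from NVIDIA’s Kyber design or any OCP specification; the point is simply that for a fixed power draw, current falls as voltage rises, and resistive loss falls with the square of current.

```python
# Illustrative only: why higher distribution voltage cuts resistive losses.
# The 100 kW rack power and 0.5 mohm path resistance are assumed example
# values, not figures from NVIDIA or OCP documentation.

def busbar_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss in a distribution path: I = P / V, loss = I^2 * R."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000           # assumed dense AI rack load
PATH_RESISTANCE_OHM = 0.0005     # assumed end-to-end distribution resistance

for voltage in (48, 400, 800):
    loss = busbar_loss(RACK_POWER_W, voltage, PATH_RESISTANCE_OHM)
    print(f"{voltage:>4} V: {RACK_POWER_W / voltage:7.0f} A, "
          f"{loss:8.1f} W lost in the distribution path")
```

Under these assumptions, the same rack loses roughly 2 kW in the distribution path at 48 V but well under 10 W at 800 V. Real comparisons involve more than one resistance number, but the square-law relationship is why higher-voltage DC saves both copper and energy.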
The momentum behind the Open Compute Project (OCP) is now, in my humble opinion, unstoppable.
What began as a hyperscaler-driven effort has become a mainstream movement. OCP’s open standards and reference designs are increasingly the only realistic way for next-wave cloud providers to approach AI-ready infrastructure without reinventing everything themselves.
NVIDIA’s MGX ecosystem and OCP’s work on busbars and liquid-cooled power shelves are turning OCP into the common language for building dense, efficient AI clusters. In 2026, OCP will shift from “interesting option” to “default starting point” for new AI capacity, especially for those without hyperscaler budgets.
Not every facility will become a full AI factory, but data centers will need to accommodate some level of AI compute capacity.
Hyperscalers will dominate training of the largest models. But inference – and smaller-scale training and fine-tuning – will be everywhere. Enterprises want to use their own data for vertical-specific use cases, without sending everything to a public cloud.
That means even “general purpose” sites will adapt: carving out high-density AI pods, upgrading network fabrics, and adjusting power and cooling envelopes. In 2026, being “AI-ready” stops being a marketing phrase and becomes a basic design requirement.
Edge computing is experiencing a renaissance.
Edge devices capable of running AI workloads are unlocking new autonomous capabilities in cities, factories, logistics, and retail. These use cases demand low latency and local data processing. Shipping everything back to a central AI factory simply does not work in every scenario.
In 2026, more operators will repurpose older or smaller facilities as edge AI nodes. Sites that previously hosted caches or basic web workloads will be upgraded to run inference clusters, small training jobs, and data aggregation pipelines. For many smaller players, winning at the edge will be more realistic than competing in hyperscale training.
AI dominates headlines, but quantum is quietly entering the conversation.
The immediate impact in 2026 will be post-quantum cryptography rather than quantum compute capacity in every data center. As awareness of “harvest now, decrypt later” strategies grows, operators will look at quantum-resistant encryption schemes across networks and storage.
Government roadmaps, such as the U.S. CNSA 2.0 milestones, are already shaping procurement. New network equipment and security systems will increasingly be expected to support post-quantum algorithms. A handful of commercial quantum-focused facilities will appear, widening the capability gap between the “AI and quantum haves” and everyone else – and forcing operators to think about how their own data centers will eventually integrate with a quantum ecosystem.
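One way to reason about the urgency is Mosca’s widely cited inequality for “harvest now, decrypt later” risk: if the time data must stay confidential plus the time needed to migrate to quantum-resistant algorithms exceeds the time until a cryptographically relevant quantum computer exists, traffic captured today is already exposed. A minimal sketch, with all three horizons as assumed placeholder values rather than figures from CNSA 2.0 or any vendor roadmap:

```python
# Mosca's rule of thumb for "harvest now, decrypt later" planning:
# if shelf_life + migration_time > time_to_quantum_threat, data encrypted
# today is already at risk. All three numbers below are illustrative
# assumptions, not official estimates.

def at_risk(shelf_life_years: float, migration_years: float, crqc_years: float) -> bool:
    """True if data must stay secret longer than the window before a
    cryptographically relevant quantum computer (CRQC) might exist."""
    return shelf_life_years + migration_years > crqc_years

SHELF_LIFE = 10   # assumed: years the data must remain confidential
MIGRATION = 5     # assumed: years to roll out post-quantum algorithms
CRQC_ETA = 12     # assumed: years until a CRQC could plausibly exist

if at_risk(SHELF_LIFE, MIGRATION, CRQC_ETA):
    print("Start migrating: harvested ciphertext could outlive its protection.")
else:
    print("Within tolerance, but revisit the estimates regularly.")
```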
The ripple effect of AI adoption continues to hammer the supply chain. GPU, memory, and storage shortages, longer lead times, and rising prices are not going away in 2026.
Under that pressure, the industry will move toward more circular models. Reuse of infrastructure will become more common. Life cycles for servers, racks, and power gear will be extended. Retrofits will be preferred over greenfield builds when possible.
Instead of ripping and replacing entire halls, operators will look at modular upgrades: swapping accelerator trays while reusing power, cooling, and networking backbones. Older facilities and hardware will be repurposed as edge nodes or secondary inference sites. Scarcity and sustainability will finally be aligned, not in conflict.
As AI systems scale out, copper is struggling to keep up. You can only push so much bandwidth over so much distance before losses become unacceptable.
In 2026, photonics moves from science project to serious pilot. We will see more experiments with optical interconnects inside and between racks, aiming to cut power and boost bandwidth.
At the same time, with land and energy constraints mounting, a handful of players will test extreme frontiers: underground bunkers, underwater modules, even orbital data center concepts. Google’s patents for orbital data centers and projects like Holland Datacenters’ Cyberbunker hint at a future where data centers operate in space or underground. These solutions could reduce the footprint on Earth, or simply shift the problem to a new domain; either way, they are exclusive, expensive, and energy-intensive to build and maintain. They are niche experiments, but they show how far the industry is willing to go to secure power, cooling, and space.
The common thread: once data centers start acting like utilities, they face the same hard questions. Where do you put them? How do they interact with communities and the environment? And what happens when they fail?
Taken together, these shifts point toward a simple conclusion: in 2026, data centers will look less like anonymous buildings full of servers and more like complex, utility-grade plants engineered around AI.
AI is the forcing function, but the implications go far beyond adding GPUs. Power architectures, cooling designs, supply chains, and site strategies are all being rewritten.
In Part 2, we move from steel and silicon to power, policy, and public trust – and explore how regulation, sovereignty, and ethics will shape the next chapter of data centers as the new utilities.

As we enter 2026, the conversation around artificial intelligence (AI) is no longer just about automation or job displacement. There’s a new AI Divide that’s all about who gets access to meaningful, reliable information in an AI-powered world.
This divide is already reshaping the workforce, threatening economic stability, and creating barriers to reliable information. Let’s explore how these changes will define 2026.

AI has long promised to handle tedious tasks, freeing up humans for more creative and strategic work. However, a divide is already evident in the workforce, where AI adoption is creating winners and losers.
Next year we’ll see a continuation of layoffs that are a direct result of the race to adopt AI. CNBC reports that companies like Klarna, Duolingo, and Salesforce have stated that AI is taking over tasks formerly handled by now-redundant staff. This is about cutting costs and redirecting funds to AI infrastructure.
This trend raises a critical question: is this a genuine evolution of the workforce or a new form of “AI-washing,” where companies use technology as a convenient excuse for traditional cost-cutting measures? Regardless of the motive, the consequences are real for the employees affected.
Since AI can impact workers in every sector, how many people will ultimately be affected? Yahoo Finance discussed a recent Goldman Sachs research paper that claims 6% to 7% of American workers will be displaced. At 6%, that works out to roughly 10 million jobs eliminated in the name of AI from the 170-million-strong US workforce. The scale of this transition will test our economic and social resilience in profound ways.

The economic fallout from widespread job loss could cascade into other areas. Without careful management, the AI Divide could widen, leaving many behind.
An interesting, if unsettling, analogy has emerged: comparing this potential crash to a wildfire. In this view, the disruption is a necessary, almost natural, event that will clear out old structures and allow for new growth. But there’s a fundamental flaw in this comparison.
Wildfires are often preventable. Ecological experts and Indigenous knowledge teach us that regular, controlled burns are essential for a healthy forest. Native peoples used “frequent, low-intensity fires” to strengthen crops, encourage wild plants to grow, improve yields from certain nut crops, and direct wild animals to graze in certain areas. These practices clear out underbrush and prevent the buildup of fuel that leads to catastrophic, uncontrollable blazes. Without this careful management, ecosystems are put at risk, endangering everything within them.
Like wildfires, the unchecked growth of AI could force us to accept devastating losses, leaving us to hope that a phoenix somehow rises from the ashes.

Just as wildfires can devastate ecosystems, the rise of gated information threatens to create a new kind of inequality in the digital landscape.
We will begin to see a big shift in how we are able to access information. Currently, nearly everything published on the open internet is used to train AI models. As this continues, we may see a future where high-quality, real-time, and verified information becomes a premium commodity.
AI-powered public relations will dominate enterprise communications, shaping narratives with precision. To keep this data out of generative AI models, access to unfiltered, expert-driven data and insights will be provided only to vetted customers and promising sales prospects.
Those who create content will lock their data behind substantial paywalls. This will create a stark divide between those who can afford credible information and those who are left with AI-generated content of varying quality. Technical content will grow sparse as the employees who created it are let go.
A similar but smaller shift happened after the dot-com boom. The gap in timely information led to the rise of blogging as a way to share information more freely.
In 2026, the stakes could be much higher. Companies that are brave enough to go against the grain and keep their technical content creators will have a competitive advantage.
As we navigate 2026, bridging the AI Divide will require prioritizing human connection and equitable access to information.
The rise of AI doesn’t have to lead to mass unemployment and information inequality. It is important to understand what AI actually does so we can use it for the betterment of all society. That is my last prediction: AI will remain intentionally poorly defined, as tech leaders realize that remediations such as grounding are misdiagnoses and that there is no clear path to “artificial general intelligence.”
In the meantime, the rest of us need to turn our attention to planning some controlled burns before we’re taken over by an AI wildfire. One way to do that is to prioritize real, authentic human connection. Perhaps the communities we built and the things we taught each other using our blogs, videos, and podcasts have been training us for this moment. As technology increasingly mediates our world, never forget the power of our communities. Hopefully, we’ll get on track and that can be a prediction for 2027.

The data center industry’s cooling crisis has a surprising champion: a century-old British lubricants manufacturer better known for what’s under the hood of your car than what’s inside your server rack. My recent conversation with Darren Burgess, PhD, Business Development Director for Castrol, and Solidigm’s Jeniece Wnorowski revealed how the company is leveraging decades of fluid expertise to address one of AI infrastructure’s most pressing challenges.
The transition to liquid cooling isn’t a preference but a necessity driven by the thermal characteristics of modern AI chipsets. As Darren explained, “The chipsets are just too hot to be cooled by air. Basically, if you want to try to do air cooling, the servers would need to be so separated you’d probably only get a couple of servers per rack in order to rush enough air through.” Given the premium on data center real estate and power, that approach obviously doesn’t scale.
Direct-to-chip cooling addresses this constraint by placing cold plates (essentially heat exchangers) directly against the chipsets generating the most heat. A propylene glycol solution circulates through these cold plates, drawing heat away from the processors and enabling the server densification that AI deployments demand. This approach allows operators to maximize their infrastructure investment while maintaining the vertical rack configurations they’re already familiar with.
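To give a sense of the thermal job the coolant is doing, here is a minimal flow-sizing sketch using the basic heat balance Q = ṁ·c_p·ΔT. The rack load, temperature rise, and fluid properties below are assumed illustrative values, not figures quoted by Castrol or Solidigm in the conversation.

```python
# Rough coolant flow sizing for direct-to-chip cooling: Q = m_dot * c_p * dT.
# All numbers below (rack power, temperature rise, PG25 properties) are
# assumed illustrative values, not vendor specifications.

RACK_HEAT_W = 120_000            # assumed liquid-cooled AI rack load
DELTA_T_K = 10.0                 # assumed coolant temperature rise across the rack
CP_PG25_J_PER_KG_K = 3900.0      # approximate specific heat of a 25% propylene glycol mix
DENSITY_PG25_KG_PER_M3 = 1020.0  # approximate density of the mixture

mass_flow_kg_s = RACK_HEAT_W / (CP_PG25_J_PER_KG_K * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_PG25_KG_PER_M3 * 1000 * 60

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {volume_flow_l_min:.0f} L/min for the whole rack")
```

Under these assumptions, a single 120 kW rack needs on the order of 180 liters of coolant circulating every minute, which is why the fluid’s chemistry and cleanliness matter as much as the pumps moving it.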
What appears straightforward on the surface reveals significant complexity once you examine the chemistry involved. The coolant used is PG25, which contains 25% propylene glycol and 75% water. This immediately raises concerns about corrosion in systems featuring copper cold plates, iron components, and various other metals throughout the cooling distribution infrastructure.
This is where Castrol’s differentiation emerges. The critical factor isn’t the base fluid itself. “The additive pack is sort of the ‘magic dust,’ which is really where the action is,” Darren explained. It protects against corrosion while maintaining cooling efficiency. If corrosion occurs, particulates can clog the narrow channels within cold plates, some measuring as small as 50 to 100 microns. The result would be degraded cooling performance or complete system failure, forcing expensive downtime to remedy the problem.
Castrol’s expertise lies in formulating additive packages that prevent these failure modes, drawing on the same chemical engineering knowledge they’ve applied to automotive lubricants for decades. The qualification process requires demonstrating that cooling fluids won’t corrode system components, an essential step before any deployment.
Darren offered a compelling analogy to illustrate the cooling fluid’s critical role in data center infrastructure. While operators can deploy redundant cooling distribution units, backup pumps, and duplicate systems throughout the infrastructure stack, the cooling fluid itself represents a single point of failure. “It’s really like the blood,” he said. “We have two eyes, hands, and feet. We can get away with a lot of repeat, but we only have one system of blood. You really have to take care of it or you’ll shut the body down, or the data center down.”
This reality positions fluid health and maintenance as crucial operational concerns. Castrol’s value proposition extends beyond simply supplying cooling liquids to encompass the entire lifecycle: proper installation to avoid contamination, ongoing maintenance to preserve fluid integrity, and eventual disposal when systems are decommissioned.
The liquid cooling market’s maturation has accelerated dramatically over the past year. Hyperscalers including Amazon Web Services, Microsoft, and Google have moved beyond pilot programs to large-scale deployments of direct-to-chip cooling. The conversation has shifted from questions about equipment availability to detailed technical discussions about additive packages, compatibility testing, and failure mode analysis.
This evolution reflects the industry’s growing confidence in liquid cooling as a proven technology rather than an experimental approach. Organizations are now focused on operational details: understanding how fluids interact with different materials, identifying potential points of failure before they occur in production, and standardizing deployment practices across the industry.
Castrol’s pivot from automotive lubricants to data center cooling solutions demonstrates how adjacent industry expertise can address emerging infrastructure challenges. Their experience formulating fluids for demanding thermal environments translates directly to the requirements of AI infrastructure.
As liquid cooling transitions from niche technology to mainstream deployment, organizations that partner with suppliers offering comprehensive lifecycle management, from installation through maintenance to disposal, will be better positioned to avoid costly downtime and operational disruptions. The cooling fluid may be invisible to end users, but it’s becoming as foundational to AI infrastructure as the silicon it protects.
For more information about Castrol’s data center cooling solutions, visit https://www.castrol.com/en/global/corporate/products/data-centre-and-it-cooling.html.

Machani Robotics sits at an interesting crossroads in the AI landscape: part deep-tech startup, part experiment in what happens when machines are designed to actually understand how people feel. As Chief Strategy Officer and CTO, Niv Sundaram is helping steer the company’s work on companion humanoids powered by emotionally intelligent AI—systems built not just to respond, but to relate.
Niv’s perspective on innovation comes from hard-won experience. Over a 15-year career at Intel, she rose to VP & GM, helped define AI instruction sets now used in generative AI, and turned cloud providers that had been difficult customers into multi-billion-dollar partnerships. In this introductory Q&A for one of TechArena’s newest voices of innovation, Niv talks about what she’s learned along the way, why the coolest innovations are often the simplest, and how emotionally aware AI could reshape everything from healthcare to mental health support.
I liked breaking things as a kid, so naturally I got a PhD in Electrical Engineering to break things more systematically. I spent 15 years at Intel rising to VP & GM, where I got to work on AI instruction sets that now power generative AI and built their Cloud Engineering organization from scratch. We turned some very unhappy cloud providers into partners generating billions in revenue, which was definitely a fun learning experience. Now I'm at Machani Robotics as Chief Strategy Officer and CTO, building companion humanoids with emotionally intelligent AI. Turns out after years of optimizing machines, I wanted to build ones that actually understand humans. We are going for Vision energy, not Ultron.
Taking on Intel's Cloud Engineering when our relationships with major cloud providers were in crisis. I expected technical firefighting. Instead, I learned that the hardest problems are human ones—rebuilding broken trust, developing entirely new team capabilities, and accepting a fundamental truth: the customer is the point of the business. That experience taught me two lessons I carry everywhere: First: Empathy isn't soft—it's strategic. Understanding the customer's perspective isn't about being nice. It's about maximizing value by solving their problem, not the one you think they have. Second: Technical excellence without customer obsession is just an expensive science project. These lessons now define my work in cognitive AI for seniors. Here, understanding human emotion isn't a nice-to-have feature—it is the product. Every algorithm, every interaction, every design decision comes back to one question: Are we building what people actually need, or what we think is clever?
Early on, I thought innovation was about faster, smaller, better specs. Now I think it's about whether something actually helps people live better lives. If your innovation doesn't improve human wellbeing in a meaningful way, you're just making expensive toys.
Emotionally intelligent AI that can actually sense and respond to how you're feeling in real time. Everyone's obsessed with generative AI making text and images, but AI that understands when you're anxious, lonely, or struggling? That's going to transform healthcare, senior care, education, and mental health support. The companies building this responsibly now will define how humans and AI coexist.
Three questions: Does it solve a real problem? Can it scale without falling apart? Does it actually help people? Most “innovations” fail at least one of these tests.
Cool innovations don't have to be complex. It is super important to keep things simple. We are in a major hype cycle with AI, and it was cloud before. Our industry loves dramatic disruption, but sometimes the best innovation is making existing things work way better. Not everything needs to be a multiverse-level event.
Collaborators or helpers, but only if we’re intentional about it. AI is great at pattern recognition and scale. Humans have lived experience, emotion, and moral imagination. At our startup, we're building AI companions to support people, not replace human connection, and to celebrate what makes us uniquely human. AI doesn’t replace creativity—it expands the canvas. Machines will generate variations, ideas, and structure; humans will focus on meaning, narrative, and emotional resonance. The future is co-creative. AI is the brush, not the artist.
Deeper understanding of AI so we don't go into these overly dramatic conclusions that AI will replace us. Every new technology gets the “this will destroy humanity” treatment. AI is not Thanos. It’s a tool. Let’s use it wisely!
Nothing beats experience but a book I would always recommend to understand our industry is Chip War, by Chris Miller. It's a brilliant history lesson about Silicon Valley and it is mandatory reading for anyone that wants to join tech.
I whiteboard it and also talk to myself. That helps to zoom out, and clarity comes from making the abstract into reality. Having a pensieve would be useful too.
Creating comics celebrating women in technology. It's storytelling, art, and advocacy rolled together, and it is always a good reminder that it's on all of us to open doors for everyone.
I love the work that Allyson and your team are doing and connecting with people who care about where tech is actually heading, not just what’s trending. Here’s hoping that we all feel empowered to set impossible goals and achieve big dreams!
I’d choose Marie Curie and Ada Lovelace—two women who didn’t just contribute to their fields, but created entirely new ones. The original Avengers of science, if you will.
Madame Curie discovered polonium and radium and pioneered the science of radioactivity, opening doors that transformed physics, medicine, and our understanding of the universe. She made these breakthroughs while working in conditions that would break most people—improvised labs, limited support, and a world that constantly questioned her place in it. I’d ask her how she kept her sense of purpose alive when the path ahead was uncharted and the world around her wasn’t ready for her brilliance. Her courage wasn’t just scientific; it was profoundly human.
Ada Lovelace, our first computer programmer, looked at early mechanical computation and saw something no one else did: a machine capable of creating art, music, and ideas. Long before computers existed, she imagined a world where logic and creativity would merge—a vision that feels uncannily aligned with today’s emotionally intelligent AI. I’d ask her what she would think about machines learning to understand emotion, not just mathematics. I imagine she’d see it as a natural evolution of the symbiosis she predicted.
Both Curie and Lovelace stood at the very beginning of revolutions that reshaped humanity. They remind me that innovation isn’t just about invention—it’s about having the imagination to see beyond the possible and the courage to keep going even when no one else can see what you see.
Their stories remind us that the future is built by people who dare to believe in it first.