
Anusha Nerella, financial industry leader and Forbes Technology Council member, explores AI-driven FinTech infrastructure — scalability, governance and agentic computing. Interested in finding out more about the AI Infra Summit and seeing Anusha Nerella live? Find out more here.

This summer, we're checking back in on our TechArena predictions for 2025 to see how they are holding up. We're starting with Vernon Turner's environmental, social, and governance (ESG) predictions for multinational corporations (MNCs). Based on his performance so far, we're giving these predictions high marks.
Status: CONFIRMED
Turner warned about regulatory confusion stifling ESG software investment, innovation, and strategic flexibility, and it seems to have happened. In a survey of 125 large MNCs, 80% of respondents reported they were adjusting their ESG strategies in 2025 and 75% said they expected the shifts would “slow down” decarbonization efforts.
Status: ACCELERATING
New projections show that this prediction is right on track. AI adoption in ESG is now expected to grow at a 28.2% compound annual growth rate through 2034. Companies are still facing regulatory frameworks demanding ESG data transparency and compliance, and AI offers a tantalizing path for automating data collection and reporting.
Status: IN PROGRESS
While the US regulatory scene is chaotic, MNCs answer to other regulators as well. The EU is working to rewrite its Corporate Sustainability Reporting Directive to make it less burdensome, but companies still need to prepare for reporting requirements in some form. That makes the hunt for ESG data scientists and reporting solutions urgent.
While regulatory chaos creates uncertainty, it’s also creating real opportunities in ESG tech infrastructure. Companies need robust, automated systems more than ever to navigate this fragmented landscape. We expect to see more tech investments in this area, even as ESG investment overall slows down, as we head into the back half of 2025.

The glowing green letters on the black screen seemed almost mystical to little Allyson Klein when she spotted her first Apple computer at her friend’s house in the 1970s.
Growing up in Silicon Valley during the semiconductor boom, Allyson's house was already a playground for the emerging digital age, with every gaming console imaginable having found its way to the family living room, courtesy of her father's work in the toy industry.
“At the beginning, there was this weird blend where gaming and computing were introducing electronics into the household,” she recalled. “My first computer was a Commodore 64, and with it, you had to learn BASIC because not much software had been written yet.”
For someone who would spend decades of her career translating complex technology into human stories, Allyson's early experiences with Atari and other gaming consoles shaped her appreciation for a world where silicon and software would reshape everything.
As founder and principal of TechArena, Allyson now leads a tech marketing agency and media platform that's carved out a unique space in the tech media landscape. But her path from Silicon Valley kid to tech industry storyteller wasn't linear; it curved its way through 22 years at Intel, a stint leading global marketing and communications at Micron, and pioneering technology podcasts along the way.
Allyson's father, an international marketing executive, brought marketing strategy discussions to the dinner table each night. Her mother worked as a nurse for semiconductor companies, bringing home stories about the intricate chemical processes used to create computer chips and what they could do to the human body.
“I developed a fascination with marketing strategy and with the process of semiconductor creation at a very young age,” she said.
The University of Oregon gave Allyson a foundation in marketing, management, and international studies – a combination that reflected her father's influence and her own intuitive understanding that technology's real power lay in its human applications. But like many business students, she found the theoretical aspects less compelling than the real-world applications she'd encounter later.
It wasn't until she was working on her MBA in Portland, surrounded by Intel employees sharing stories of their work, that she realized she wanted to be more than an observer of the tech revolution.
“Their stories about foundational work and industries being reshaped by technology convinced me I wanted to be part of that,” she said.
Allyson's Intel career began in the late 1990s, during one of the most transformative periods in computing history.
One of her most significant mentors early in her career was Jim Pappas, who was instrumental in creating USB, PCI, and countless other industry standards. Pappas didn't just teach Allyson about technology – he showed her how foundational innovations could spark creativity across entire global ecosystems.
Allyson found herself at the center of what she now calls one of the most disruptive forces in tech over the last 20 years: the rise of industry-standard data centers and the creation of cloud computing.
“We take what that technology has done for granted, but it really transformed the world,” she said. The pandemic would later prove her point dramatically – with cloud infrastructure keeping children in school, delivering products to doorsteps, and providing the connectivity tools that held society together during lockdown.
Allyson viewed her role at Intel as much more than marketing individual products; she was helping build ecosystems – creating initiatives that brought together companies delivering complementary technologies, crafting foundational messaging that would shape entire industry narratives, and telling stories that would help customers understand what Intel was building and why it mattered.
One of her most successful projects was the Open Data Center Alliance, formed with enterprise technology leaders from around the globe to document their requirements for cloud computing.
“What that taught me was a deep respect for what it takes to run IT operations,” she said. “Understanding the challenges they face day-to-day and what they think about in terms of workloads and workflows across very complex enterprise environments.”
In 2009, Allyson’s boss approached her with a challenge.
“There's this new thing called social media,” he said. “Go figure it out.”
Allyson researched and worked with external agencies to come up with recommendations for how Intel would engage in social media, and one of her two resulting actions was to start a podcast.
Allyson had an insight that would prove prescient: the best conversations about technology weren't happening in conference rooms or marketing presentations – they were happening in Intel's cafeterias, where she would sit for hours talking with engineers, asking them to explain their latest innovations.
Chip Chat launched as a weekly show and would eventually run for 754 episodes, reaching over 20 million listeners and winning numerous industry awards. The podcast gave Allyson “a profound appreciation for the role of inquiry in driving narrative.”
“People love to talk about what they've done, but they need that prompt to give them permission to share,” she noted.
After Intel, Allyson took on the role of leading global marketing and communications at Micron, the world's fourth-largest semiconductor manufacturer. The position offered her first opportunity to oversee corporate and internal communications, managing everything from COVID-19 messaging to responses to the Black Lives Matter movement to technology evolution and the CHIPS Act.
But by 2022, Allyson found herself at a crossroads. She had proven she could own the message inside major corporations, but something was missing.
“I missed creating content. I missed telling stories,” she said. “I wasn't getting the opportunity to take pen to paper or sit in front of a mic anymore, and those things gave me joy.”
The idea for TechArena emerged from Allyson's realization that she might be more inspired working as strategic counsel across multiple companies rather than owning the narrative within a single organization. But it was also born from her unique perspective on an industry she'd lived inside for decades.
“Most tech journalists don't have the background of living inside companies,” she said. “At TechArena, we understand the shorthand and what might actually be going on because we've lived in that environment so long.”
TechArena launched as both a content platform and an agency, allowing Allyson to demonstrate her team's "mad skills" while building a business around strategic marketing counsel. The platform has featured companies representing more than $9 trillion in combined market cap, as well as 84 founders and CEOs of small tech startups who've shared their stories.
Allyson’s approach to content differs markedly from traditional tech journalism. Every piece includes the “TechArena take” – an opinion based on insider knowledge. The writing style is deliberately less formal than typical industry publications because, as she puts it, “people are human and they want to enjoy the content they're consuming.”
Perhaps Allyson's most unconventional belief is her optimism about technology's impact on human jobs. While many worry about AI replacing human workers, Allyson draws on her experience with previous technological disruptions.
“When cloud computing and virtualization emerged, we thought consolidating workloads 20-to-1 would collapse the server market,” she recalled. “We worried about this constantly at Intel.”
Instead, more applications were built, more uses for technology emerged, and IT departments only grew larger.
What emerges from Allyson's story is a career built on a fundamental insight: technology's real power lies not in its technical specifications, but in its human applications. Whether creating narratives at Intel such as "We move, store, and process the world's data," building ecosystems across the industry, leading massive global organizations, or launching podcasts that gave engineers permission to share their passion, she has consistently focused on the human element in technological advancement.
“The center of any marketing and communications program is the message and the audience,” she said when asked about her unique ability to work across both disciplines. “Understanding the unique challenges each field solves with that message and audience defines both their synergies and differences.”
As Allyson looks to the future of TechArena, her vision remains rooted in this human-centric approach. She envisions a team creating content with multiple voices, richer client collaborations, and a brand with deep meaning to its audience.
“I'm more fascinated with technology today than I've ever been,” she said, describing recent interviews on agentic AI's role in silicon development and AI-driven simulation for 5G and 6G antenna testing. “The geekier it gets, the more excited I become.”
For someone who started with gaming consoles in her childhood living room, Allyson still finds wonder at the intersection of human creativity and technological possibility – committed to telling the stories that help the rest of us understand why innovation matters.

The legal industry saw another major milestone in its AI transformation journey this week as Clio, a legal practice management company, announced its acquisition of AI-powered legal research platform vLex for $1 billion in cash and stock. The deal promises to bring together technologies spanning the management of law firms and the practice of law into a single, unified platform.
“Through this acquisition we are laying the foundation for the first and only cloud-based, AI-powered platform that seamlessly connects the business and practice of law,” Clio’s CEO and Founder Jack Newton said in a blog post announcing the acquisition. “It’s a moment that reflects not only the scale of what we’re building, but the scale of what’s possible and represents a bold step toward building a new category of legal technology.”
The scale of the deal is reflected not only in dollars, but in the reach of the two companies' systems. Clio's practice management software is used by over 200,000 law firms worldwide; vLex's global legal intelligence platform, known for its built-in AI assistant Vincent and proprietary database of more than a billion legal documents, serves more than 2.8 million registered users.
“With the most comprehensive global legal library and firm insights, Clio and vLex are uniquely positioned to reshape the mechanics of legal work and redefine the trajectory of the profession,” CEO and Co-Founder of vLex Lluis Faus said.
The acquisition announcement comes close on the heels of another recently announced strategic alliance, between Harvey AI, which offers a ChatGPT-based AI platform for tasks like legal research and contract analysis, and LexisNexis, one of vLex's main competitors. In that case, the two companies announced that LexisNexis' database and AI capabilities would be integrated into Harvey to create new workflows and "a powerful new experience for Harvey customers."
Clio’s acquisition of vLex is subject to standard regulatory approvals. More information about the capabilities the combined platforms may offer is planned to be shared at ClioCon this October in Boston.
These partnerships reflect the legal industry’s rapid transformation as AI reshapes how lawyers conduct research, draft documents, and manage cases and their practices. They also point to an urgent necessity: an arms race for the best data in the domain.
Great data is crucial to building the best AI models and platforms. In an industry where model performance can be the difference between liberty and incarceration, or which way a billion-dollar judgment goes, putting a price tag on "the best" is an expensive prospect.
Two of the three dominant sources of data that currently exist — vLex, LexisNexis, and a legal database owned by Thomson Reuters — have now been claimed. What remains to be seen is what will happen to the legal tech service providers who haven’t been able to get in on this gold rush.
We’ll be watching to see where Thomson Reuters lands. We’ll also be watching for consolidation waves in other industries with similarly concentrated, high-value data repositories. We suspect the scrambles to control the information that powers AI systems have only just begun.

After nearly 18 months of regulatory friction, Hewlett Packard Enterprise (HPE) has cleared the final hurdle in its $14 billion acquisition of Juniper Networks. On June 28, 2025, the U.S. Department of Justice announced it had reached a settlement with HPE, ending an antitrust challenge that had threatened to derail the high-stakes deal just weeks before trial.
The settlement, marked by two key concessions, removes a major overhang on the merger and allows HPE to move forward with a strategic acquisition designed to strengthen its position in the AI-native networking era.
As enterprises re-architect for AI workloads, the network has moved from a supporting role to a foundational pillar of modern infrastructure. The HPE–Juniper combination reflects this shift – where intelligence, performance, and adaptability must now live inside the network itself.
To address competitive concerns, HPE will divest its Instant On campus and branch WLAN business, including all IP, R&D, and customer relationships. The company will have 180 days to identify and secure a DOJ-approved buyer.
Additionally, the parties agreed to a technology licensing requirement: Juniper’s AI Ops for Mist source code – critical to its WLAN optimization and automation – must be made available through a non-exclusive, perpetual license via a competitive auction. This includes optional transitional support and even personnel transfers to jump-start competition.
These terms are designed to preserve competitive dynamics in enterprise wireless networking while still allowing HPE to advance the merger.
HPE CEO Antonio Neri has long framed the Juniper acquisition as more than just consolidation. In his words, the combined entity offers a “modern network architecture alternative” that’s optimized for AI workloads across cloud, enterprise, and service provider environments.
By bringing Juniper’s AI Ops capabilities, Mist AI portfolio, and networking silicon innovation under HPE’s umbrella, alongside Aruba’s access and security tools, the company is building what it sees as a differentiated, software-defined stack for the next wave of infrastructure.
With AI workloads reshaping data flows, automation, and telemetry requirements across all network layers, HPE's endgame is to own the central nervous system of modern infrastructure. The network is no longer just the highway for data – it's the platform where real-time insight, optimization, and control happen. In this vision, the network isn't adjacent to AI infrastructure; it is AI infrastructure.
Channel partners have called the deal a “game changer” for HPE, particularly with its expanded reach across cloud and telco verticals. However, some observers remain skeptical of the DOJ’s assertion that the combined HPE-Juniper entity and Cisco would control over 70% of the U.S. WLAN market – a figure that may not fully account for disruptive players and emerging architectures in this fast-moving sector.
The DOJ’s settlement may have closed the chapter on legal opposition, but the real test is what HPE does next. With regulatory clearance in hand, the company now carries both the strategic potential and the execution burden of realizing its AI-native networking vision.
The divestiture of Instant On and the licensing of Juniper’s Mist AI source code underscore a deeper truth: AI-era infrastructure will be won not just by owning the stack, but by how open, adaptable, and customer-centric that stack proves to be.
If HPE delivers on its promise of real-time automation, silicon-level optimization, and intelligent edge-to-core integration, all without locking customers into a monolithic experience, it has a chance to challenge Cisco and reset the expectations for what enterprise networking should be.
The next moves – from roadmap integrations to go-to-market strategy – will determine whether this is just a large deal or a defining moment in the re-centering of the network as AI’s operating backbone.

With all of the buzz from the business world about how AI will affect how we work and live, it can be hard to remember the truly mind-blowing variety of potential applications for the technology. This week, Google DeepMind (whose AlphaFold work earned the 2024 Nobel Prize in Chemistry) broke through the noise with a stunning reminder when it unveiled AlphaGenome.
More than 20 years ago, the Human Genome Project succeeded in sequencing the 3.1 billion genetic letters that provide the DNA instructions to create a human being. But the end of that journey (like many scientific journeys) offered more questions than answers. Chief among them: What does 98% of our DNA actually do?
About 2% of our DNA was determined to be dedicated to making proteins. The rest, the "non-coding" DNA, was dubbed "junk" by some and, by those more curious, "dark matter" and the new frontier of genetic exploration.
Over time, scientists have come to understand that these non-coding sequences still affect protein activity. They determine whether or not certain genes are expressed (a process called regulation), which can affect whether a gene that commonly drives cancer or heart disease, for example, is turned "on." But no single DNA stretch has only one job, either. So understanding the effect of a mutation in a stretch of DNA remains an incredibly complex problem.
Two decades later, enter AlphaGenome: an AI model that predicts how genetic mutations affect gene regulation across this non-coding DNA. The tool processes up to one million DNA letters simultaneously and scores variant effects within seconds. Its model architecture is a hybrid neural network leveraging a mix of convolutional layers and transformers.
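DeepMind hasn't released the model's code, but the architectural pattern it describes (convolutional layers summarizing local DNA motifs, a transformer relating distant regions of the sequence) is easy to sketch. Below is a toy PyTorch illustration; the layer sizes, the scoring head, and the reference-versus-variant comparison are our own illustrative assumptions, not AlphaGenome's actual design:

    import torch
    import torch.nn as nn

    class ConvTransformerSketch(nn.Module):
        """Toy hybrid model: convolutions capture local DNA motifs,
        a transformer relates distant regions of the sequence."""
        def __init__(self, channels=128, heads=4, layers=2):
            super().__init__()
            # One-hot DNA (A, C, G, T) -> local motif features; strides shrink
            # a long sequence into a coarser track the transformer can afford.
            self.conv = nn.Sequential(
                nn.Conv1d(4, channels, kernel_size=15, stride=8, padding=7),
                nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size=5, stride=4, padding=2),
                nn.ReLU(),
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=channels, nhead=heads, batch_first=True)
            self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=layers)
            self.head = nn.Linear(channels, 1)  # toy per-position regulatory score

        def forward(self, dna_onehot):               # (batch, 4, sequence_length)
            x = self.conv(dna_onehot)                # (batch, channels, reduced_length)
            x = self.transformer(x.transpose(1, 2))  # attention across the sequence
            return self.head(x).squeeze(-1)          # (batch, reduced_length)

    # Score a reference vs. a mutated sequence; the difference is a crude
    # "variant effect" in the spirit of what the article describes.
    model = ConvTransformerSketch()
    ref = torch.zeros(1, 4, 4096)
    ref[0, 0, :] = 1.0                        # toy all-'A' sequence
    alt = ref.clone()
    alt[0, 0, 2048] = 0.0
    alt[0, 3, 2048] = 1.0                     # single A->T substitution
    with torch.no_grad():
        effect = (model(alt) - model(ref)).abs().max()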
The AI model achieves state-of-the-art performance, outperforming specialized tools on 22 of 24 sequence evaluations. Unlike previous models that could handle long sequences but weren’t sensitive to single-letter changes in those sequences, AlphaGenome maintains this level of precision across the long DNA stretches.
“It’s a milestone for the field,” said Dr. Caleb Lareau of Memorial Sloan Kettering Cancer Center. “For the first time, we have a single model that unifies long-range context, base-level precision, and state-of-the-art performance across a whole spectrum of genomic tasks.”
AlphaGenome could help researchers pinpoint disease causes more precisely, guide the design of synthetic DNA for specific regulatory functions, and accelerate genome understanding by mapping crucial functional elements. Its applications today are in research, not personal genome prediction for individuals.
AlphaGenome launches through a non-commercial API with plans for full model release and commercial licensing.
While headlines continue to focus on AI’s disruption of industries and job markets, AlphaGenome represents something more profound: AI’s potential to help us unlock enduring mysteries. This isn’t about automating spreadsheets or generating marketing copy. As we improve our understanding of what 98% of our DNA actually does, we’re not just advancing medicine or research. We’re fundamentally expanding our comprehension of life. Google DeepMind’s announcement this week reminded us that AI’s most transformative power may not lie in reshaping how we work, but in revealing who we are.

In case you missed it, AI's impact on workforce reductions arrived in force last week. An open letter from Andy Jassy declared that future layoffs will re-shape Amazon's workforce and re-tool the skill sets of human workers to better complement AI productivity. At Microsoft, this re-shaping has already begun, as Copilot takes over the controls at one of the world's largest companies. According to the World Economic Forum, 41% of employers worldwide intend to reduce their workforce because of AI in the next five years, across industries, so these latest tech moves are expected, if perhaps arriving earlier than some foresaw.

This bullish perspective is rampant in boardrooms despite the less-than-stellar productivity gains realized by generative AI to date. According to the Federal Reserve, gen AI has increased US productivity by an anemic 1.1% in an economy that is arguably ahead of the adoption curve. And while function-level reviews show more rapid advancement (most notably a 14% increase in productivity in customer support, including a notable 35% increase for entry-level roles in the field, according to the National Bureau of Economic Research), other fields are falling...flat, leaving many to query whether gen AI, at least in the 2025 era, is CEO weak sauce for workforce reduction actions.
Enter Intel, which on Friday announced that it would outsource its marketing functions to Accenture to run with, yes, you guessed it, generative AI. The news spread like wildfire across the marketing landscape for its bold prognostication that the marketing practice is ready for a holistic handoff. It caught my attention because I know the size, scale, and complexity of the historic Intel marketing engine well, having overseen the global data center and edge marketing practices at the company. While no one is going to argue that Intel's marketing is anything but a shadow of what it once was, the company sold an estimated 250 million CPUs in 2024 and is still a global presence operating manufacturing sites across the world. It engages in many market segments with a complex partner ecosystem across hardware, software, and service providers. Even with a falling market position, the marketing of microprocessors is not for the faint of heart, with complex value propositions, "better together" narratives, and sales and support motions to deliver.
So, what to make of this move? Ironically, this week, TechArena started unpacking our Who's Who in the AI Zoo article series, discussing the current state of classes of AI toolsets from a marketing perspective. I explained in my LinkedIn post introducing the initial article that we've leaned into gen AI for everything we do as a marketing organization since day one. And while our productivity is amazing with these tools, we are still very much in an era where AI is an accelerant to marketing delivery akin to starting your day with a jumbo-sized Red Bull. I've been delighted by how this has especially supported less tenured staff and how it's opened the door for more efficient delivery of services to our clients.
Where are the gaps? Having worked in engineering cultures for the bulk of my career, I'm very aware of non-marketing professionals thinking that marketing is simply easy. After all, it's just words and pictures, right? While the landscape is changing every day, savvy human marketers are still uniquely poised to craft strategy, to choose messaging and narrative, and to nurture relationships across complex industries. And while Intel and Accenture may very well shock the world with outstanding marketing results, I'm viewing this latest move as yet another signal that Intel is waving a white flag on any quick return to industry leadership, and that the marketing move is a rounding error toward Lip-Bu Tan's targeted 20%+ layoffs at the company.
The debate over the future of work will only heat up in the coming months as more companies make bold bets on gen AI integration into the workforce, and, at TechArena, we will be diving deep into the conversation across business functions and, of course, the leading edge of tech innovation.

Today, AI-native data platform company WEKA launched NeuralMesh, a software-defined storage system designed to power AI applications requiring real-time responses. The mesh-based architecture delivers performance for AI workloads and scales from petabytes to exabytes while becoming more resilient as it grows.
WEKA built NeuralMesh specifically for enterprise AI and agentic AI systems that demand data access in microseconds, not milliseconds. It offers a fully containerized, mesh-based architecture to seamlessly connect data, storage, compute, and AI services.
“AI innovation continues to evolve at a blistering pace,” says Liran Zvibel, WEKA’s cofounder and CEO. “Across our customer base, we are seeing petascale customer environments growing to exabyte scale at an incomprehensible rate. The future is exascale. Regardless of where you are in your AI journey today, your data architecture must be able to adapt and scale to support this inevitability or risk falling behind.”
Traditional storage architectures slow down and become fragile as AI environments expand, forcing organizations to add costly compute and memory resources to keep up with the demands of their AI workloads. In contrast, NeuralMesh's software-defined, microservices-based architecture allows its performance to improve rather than degrade as data volumes reach exabyte scale. The company says it is the "world's only intelligent, adaptive storage system" purpose-built for accelerating graphics processing units, tensor processing units, and AI workloads.

WEKA highlights five key breakthrough capabilities delivered by NeuralMesh. The solution offers benefits for AI companies looking to train models faster and deploy agents that respond quickly; for hyperscale and neocloud service providers looking to expand service to more customers without also expanding infrastructure; and for enterprises that need to deploy and scale AI-ready infrastructure without overwhelming complexity.
NeuralMesh represents over a decade of development supported by more than 140 patents. WEKA hails it as a "revolutionary leap" in its data infrastructure solutions, which began with a parallel file system for high-performance computing and machine learning and evolved to pioneering the high-performance AI data platform category in 2021.
NeuralMesh launches in limited release for enterprise and large-scale AI deployments, with general availability planned for fall 2025.
Following closely on last week's announcement that WEKA and Nebius will deliver a GPU-as-a-Service (GPUaaS) platform, this news further demonstrates how WEKA's commitment to innovation is solidifying its position as the high-performance storage layer for AI.
The scale here, with microsecond response times and exabyte-scale capability, is remarkable—and necessary. NeuralMesh positions enterprises to harness the full potential of agentic AI systems where split-second decisions drive competitive advantage.
With its existing product, customers have already reported impressive results. A case study with Stability AI highlighted that by using WEKA, Stability AI achieved 93% GPU utilization during AI model training and increased its cloud storage capacity while reducing costs by 80%. We'll be watching to see the results NeuralMesh can drive as a technological leap forward.

Antillion is a UK-based technology company focused on edge outcomes that brings together diverse talent to deliver innovative, user-centric solutions through collaborative partnerships with customers.
The company emphasizes research and development, maintaining cutting-edge capabilities at their London-area facility equipped with CNC machines, 3D printers, and quality assurance systems, while their team of engineers and system designers create visually stunning, functional products that blend modern design with advanced technology. Antillion's mission centers on simplifying complexity and addressing genuine user needs through products that provide immediate positive experiences, supported by comprehensive training, expert on-site assistance, and a commitment to exceptional craftsmanship that drives both trust and user satisfaction.
We caught up with Alistair Bradbrook, Antillion's founder and COO, to learn how they utilize storage technologies to bring tactical data center performance to the most challenging edge environments.
At Antillion, we build for customers operating where real-world demands push technology to its limits. These are environments that are harsh, unpredictable, and often disconnected — where you can’t count on infrastructure, power, or even a network signal.
The challenges? They’re complex and relentless.
First, there’s the environment itself. We’re talking about kit that needs to survive inside armoured vehicles filled with dust or bolted to poles in sub-zero Arctic winds. These aren’t lab conditions — this is real-world brutality where traditional IT just can’t cope.
Then there’s SWaP – size, weight, and power. We design platforms that can be carried in a rucksack or strapped inside a vehicle. Performance is still expected — but within incredibly tight physical and power constraints. That’s a huge design challenge.
Connectivity’s another big one. Our customers often operate in degraded or disconnected networks. There’s no cloud to rely on. So, everything — from data ingestion to processing and decision-making — needs to happen at the point of contact. Locally. Instantly.
The operational tempo is relentless too. When you’re in a mission-critical situation, latency isn’t an inconvenience — it’s a risk. Insight has to be real-time. You’re talking about sensor data flowing into compute, through analysis, and into an actionable decision — in seconds or less.
And lastly, usability under pressure. These systems aren’t being deployed by sysadmins — it could be soldiers, field engineers, or emergency responders. There’s no time for training manuals. It just needs to work — fast, reliably, and intuitively. That’s what we focus on delivering.
We’ve designed our PACE platforms from the ground up to bring data centre-grade capability to the edge — without compromising on performance, durability, or usability.
For us, it always starts with form and function. We don’t design for static data centre environments — we design for rooftops, vehicle interiors, trenches, and backpacks. Half-width, short-depth, modular hardware that’s built to fit real-world environments. That’s the basis of the PACE design language.
On the compute side, we’re integrating serious horsepower. We’re talking AMD EPYC, Intel Xeon, and even ARM — plus accelerators from NVIDIA and others — packed into ruggedized, sealed, and IP-rated platforms that can be mounted in a vehicle or carried by hand. These systems aren’t just tough — they’re powerful.
Storage is a big part of the picture, too. Solidigm’s EDSFF SSDs have been a game-changer for us. We now offer systems that support the 122TB D5-P5336 from Solidigm — and they’re holding up brilliantly in high-vibration, high-temperature scenarios. Whether it’s a wearable system or something mounted, we can keep massive volumes of data local, reliable, and fast.
Then there’s the design philosophy. We obsess over usability — intuitive deployment, straightforward servicing, and no-nonsense operation. These systems have to work for people in the field, under pressure. The goal is always the same: the tech disappears, the mission stays front and centre.
Whether it’s the ultra-portable PACE AIR or the fully ruggedized PACE FRONTIER, we’re not just making edge compute possible — we’re making it powerful, deployable, and trusted in the toughest environments.
It’s hard to overstate the impact high-density SSDs — especially Solidigm’s 15.36TB E1.S and 122TB E1.L drives — have had on what we can achieve at the edge.
Before, storage was a compromise. If you wanted a compact system, you had to accept limited capacity. Not anymore. Now, even our smallest PACE units — like the A211 — can handle massive mission datasets: multi-stream 4K ISR, full platform telemetry, and raw AI training data. And they can do it right where the data’s generated.
The NVMe performance is a huge enabler. We’re not waiting for data to move — we’re running real-time analytics, AI inference, and sensor fusion right there in the field. That’s crucial when you’re working in denied or degraded networks where the cloud just isn’t an option.
The efficiency is game-changing too — more capacity per watt, per millimetre, and per kilogram. That means smaller, lighter platforms that don’t sacrifice on performance or endurance — exactly what’s needed in vehicle, wearable, or airborne deployments.
From a reliability standpoint, Solidigm’s been rock solid. Across hundreds of deployed drives in some truly hostile environments, we’ve seen zero failures. That kind of trust is critical in military and security deployments — we don’t get second chances in the field.
Fewer drives mean fewer cables, faster builds, and simpler logistics. For our customers, that translates directly into reduced operational burden, easier maintenance, and faster time to deployment.
To put it simply: these drives are what let us bring data centre-class storage to the edge — and make it rugged, mobile, and mission-ready.
We don’t build traditional data centres — our philosophy is all about disaggregation and decentralisation. We’re taking compute out into the world, wherever the mission demands it. High-capacity SSDs are critical to making that model both sustainable and efficient.
For starters, SSDs give us a much better performance-per-watt ratio than spinning discs. That means more compute and more storage for less power — which is essential when you're relying on batteries or field generators. In remote, mobile deployments, every watt counts.
They also run cooler. That sounds simple, but it makes a huge difference in our sealed, rugged systems like PACE Frontier. We don’t have the luxury of big fans or data centre HVAC — SSDs let us keep things thermally efficient without adding complexity or extra energy overhead.
Another big factor is data movement — or rather, the lack of it. Because we can store and process petabytes locally on the edge, we’re not constantly pushing data back to central infrastructure. That dramatically reduces energy consumption, especially across constrained or expensive networks.
There’s also the sustainability of the platforms themselves. SSD durability helps extend hardware life. Combine that with our Evergreen program — where we upgrade and refresh existing systems instead of replacing them — and you’re looking at a far longer lifecycle. That means less waste, fewer shipments, and a smaller overall footprint.
It’s not just about energy efficiency — it’s about operational sustainability. We’re building systems that last longer, use less, and deliver more — wherever they’re deployed.
QLC drives have been a game-changer for what our customers can achieve in the field. They’ve opened up entirely new mission profiles — especially for defense, security, and industrial applications — by enabling us to deliver massive storage and lightning-fast performance in incredibly compact, rugged formats.
We’re now running AI and analytics right on the edge, on systems the size of a lunchbox. Clients are deploying models for things like object detection, anomaly spotting, and pattern recognition — and doing it in real time, exactly where the data’s being generated. There’s no need to wait for upload or connectivity — the insight happens there and then.
That’s crucial in DIL environments (Disconnected, Intermittent, or Limited networks). With these QLC drives, the data stays local and accessible even when comms are down. It’s not just about speed; it’s about continuity and control. For our customers, that kind of autonomy is often mission-critical.
What’s more, QLC drive density means we can scale up without scaling out. Using Solidigm’s E1.S or E1.L modules, our customers can multiply their storage without changing the physical footprint — same chassis, same power draw, just more capability. That’s especially important when size and weight are tightly constrained.
This tech also helps us move faster. Build and provisioning times are down by up to 30%, which gets systems into the field quicker. In operational terms, that can be the difference between acting now and reacting too late.
And perhaps most exciting — we’re enabling entirely new types of missions. Cybersecurity at the edge, autonomous platforms, predictive maintenance using AI — these just weren’t feasible before. Now they are, thanks to the performance and resilience these QLC drives bring.
As we expand the PACE portfolio with the latest high-core-count processors, greater memory, and high-capacity Solidigm storage, we’re developing more powerful and mission-specific platforms for the far edge. Each one is true to our design-first ethos and built to deliver more compute, more capability, and more outcomes wherever the mission takes them.

We sat down with David Lim, senior director of marketing for Hypertec, to learn more about infrastructure that is purpose-built for AI and HPC workloads.
Founded in 1984, Hypertec is an award-winning global technology provider offering a wide range of cutting-edge products and services with a strong emphasis on sustainability. Trusted by industry leaders, Hypertec serves clients in over 80 countries worldwide. The company has earned international recognition for its sustainability leadership and innovative manufacturing practices.
At the heart of this collaboration is a pretty simple idea: the demands on data centers are growing fast, from AI training and real-time analytics to ultra-low-latency use cases like streaming and remote surgery. So we teamed up with Solidigm to show how you can handle that kind of pressure with infrastructure that's built for it.
We're bringing our immersion-born TRIDENT servers, which are designed from day one to run submerged in liquid for better cooling and higher density. Paired with Solidigm's SSDs, we're showing what it looks like when compute and data access move faster, run cooler, and scale smarter, whether it's a massive AI cluster or compute deployed in a hospital or telecom edge site.
The magic really happens when you combine immersion cooling with fast, reliable storage. Immersion lets us push performance limits. We're running CPUs and GPUs at peak power without worrying about throttling or overheating. That's critical for workloads that don't stop, like training large AI models or running inference in real time at the edge.
Now, add Solidigm’s SSDs, and you’ve got the speed to feed those compute engines. Whether it's a 4K video being streamed from a CDN node, or a radiology scan being pulled up instantly in a hospital, fast I/O makes all the difference. The system doesn’t just run, it flies, and it keeps doing so consistently under load.
Data centers are under immense pressure as AI and high-performance computing (HPC) workloads grow exponentially. These applications demand significant computational power, leading to increased energy consumption and heat generation. Traditional air-cooling methods are struggling to keep up, especially as power densities rise. For instance, average power densities have more than doubled in just two years, reaching 17 kilowatts (kW) per rack, and are expected to rise to as high as 30 kW by 2027.
Moreover, the massive data volumes processed by AI applications require storage solutions that can handle high throughput with low latency. Traditional storage systems often become bottlenecks, hindering overall system performance. Additionally, the increasing energy consumption raises concerns about the environmental impact of data centers. Projections suggest that data centers could consume up to 9% of the United States' electricity by 2030, more than twice their current usage.
To address these challenges, Hypertec and Solidigm have collaborated to develop integrated solutions. Immersion cooling's more efficient heat dissipation allows higher power densities, enabling more compute resources in the same physical space while reducing reliance on traditional air-cooling systems. Solidigm's SSDs are designed for high throughput and low latency, addressing data bottlenecks in AI applications. Their high-capacity SSDs enable data centers to reduce the number of physical drives, decrease footprint, reduce power consumption, and simplify maintenance. Together, these technologies offer a scalable, energy-efficient, high-performance infrastructure solution tailored to the demands of modern AI and HPC workloads.
It's one thing to build fast infrastructure; it's another to build smart, efficient infrastructure. That's where we're focused. Immersion cooling is incredibly efficient: by removing air cooling, we cut out a huge portion of the power bill. And Solidigm's SSDs pull their weight too, with lower power draw and high capacity, so we can do more with fewer drives.
The result? You get the performance you're looking for at a lower carbon cost. And for customers in healthcare, finance, and telecom, who are all under pressure to hit sustainability goals, this isn't a nice-to-have: it's table stakes.
This is just the beginning. We're already aligning around what's next: support for Gen5 and CXL, AI at the edge, and liquid-cooled storage, all the building blocks of future-ready infrastructure.
Think about what's coming: AI models that run in real time on the edge of a 5G network. Robotic surgeries assisted by AI, where latency is measured in milliseconds. High-frequency trading platforms that need zero delay. These are not sci-fi anymore; they're live today. And we're building the compute backbone that makes them possible, scalable, and sustainable.

With the AI accelerator market forecast to exceed $500 billion, all eyes were on San Jose this week to hear from Lisa Su and her team on how AMD is progressing on its strategy to take the leadership mantle for AI infrastructure delivery.
The theme of the day? Open Innovation. With a keynote filled with heady announcements, the throughline woven across discussions showcased that AMD fully intends to be the choice of both partners and customers for collaborative innovation as the world's data centers pivot to deployments of generative and agentic AI at scale. And while many feel that competing NVIDIA platforms have a stranglehold on the market, one thing I kept thinking about as Lisa discussed the commanding progress of her team is that she knows more than a little about the relentless pursuit of a Goliath and is pretty good at playing the giant slayer. Let's break down the news of the day.
Data center-scale performance will define success as AI systems proliferate, and AMD is the only company offering the full suite of CPU, GPU, and network solutions needed to deliver AI clusters. Today, AMD delivered its Instinct MI355 to the market, a 4th-generation architecture built on a 3nm process and packing 185 billion transistors. What does that mean for real-world performance? With up to 35X gen-over-gen performance, AMD has closed much of the gap vs. the NVIDIA B200, delivering an average 3.5X performance uplift across training and inference. Some competitive head-to-head metrics showed performance parity with different Llama configurations running pre-training workloads and 1.1X+ performance in equivalent fine-tuning environments. While it's too early, and frankly naïve, to declare AMD the performance winner, these gains will certainly open doors to deeper customer collaboration with this generation of products.
Deeper customer collaborations are essential for the enormous sunshine of what comes next: Helios. In 2026, AMD will unify its entire silicon portfolio to deliver Helios, an AI-optimized rack-scale system that integrates next-generation Venice EPYC processors, next-generation Instinct MI400 GPUs, and next-generation Pensando Vulcano DPUs, tapping UALink and Ultra Ethernet connectivity. This behemoth will be delivered as an OCP-compliant, open-standards-based solution, which is expected to turn a lot of heads given claimed performance parity with NVIDIA's Vera Rubin across GPU domain size, scale-up bandwidth, and FP4/FP8 FLOPS, plus a 1.5X advantage in HBM4 memory capacity, memory bandwidth, and scale-out networking. At the heart of this configuration, of course, is the upcoming Instinct MI400 GPU, which stole a bit of the stage from the MI355 introduction today as Lisa claimed a targeted 10X performance improvement gen over gen. Even in an era of 2X-Moore's-Law progression, this is a stunning performance target that places the competition on its heels.
Powering the AI software stack behind Helios is ROCm, AMD’s open compute platform, which provides the common programming foundation across the Instinct GPU lineup. With support for major AI frameworks and optimized libraries, ROCm enables portability, scale, and high performance across AMD hardware – cementing AMD’s strategy of combining open software with open systems to meet enterprise and hyperscaler needs.
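What does that portability look like in practice? A ROCm build of PyTorch, for example, exposes Instinct GPUs through the same device API that CUDA developers already use, so framework-level code ports with little or no change. A minimal sketch (assuming a ROCm build of PyTorch is installed):

    import torch  # a ROCm build of PyTorch drives Instinct GPUs via the familiar CUDA API

    if torch.cuda.is_available():                 # true on ROCm-visible AMD GPUs too
        print(torch.cuda.get_device_name(0))      # e.g. an Instinct accelerator
        print("HIP runtime:", torch.version.hip)  # set on ROCm builds, None on CUDA builds

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(2048, 2048, device=device)
    y = x @ x                                     # matmul dispatched to ROCm libraries on AMD hardware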
While the Instinct MI355 introduction and the promise of Helios commanded attention, the message of open innovation was delivered in every word from AMD and partner executives, reflecting a strong customer desire for choice in AI solution alternatives. This open innovation starts with advancement of the ROCm software platform, AMD's alternative to CUDA, with ROCm 7 introduced today. Historically, software has not been AMD's strong suit, and it's obvious that the company is investing to change this through internal development and acquisition of talent, including the high-profile addition of Lamini, who was on hand to share how a full suite of developer training will soon be delivered to accelerate developer activation of the software in AI development.
Openness was also evident in AMD's strong commitment to the Open Compute Project Foundation and its leadership within the Ultra Accelerator Link (UALink) and Ultra Ethernet standards efforts, with Ultra Ethernet hitting 1.0 this week. This commitment to standards-based networking, and the significant keynote time devoted to standards-based advancement, underscored the importance of network scale to AI compute delivery, as well as a key differentiator from competitors who rely on proprietary solutions. It was fantastic to see the industry support of Astera Labs and Marvell highlighted as leading examples of a vibrant networking industry assembling to deliver the next generation of AI data center connectivity, and I expect to see a lot more about solution delivery from a host of vendors in the coming year as standards mature and customers begin deploying these standards-based solutions.
AMD placed a central spotlight on the developers in attendance and on its developer sessions as it continues to advance ROCm. To help tell this story, leaders from (FILL IN) emerged to share the progress of advancing AI together. This demonstrated that real developer cycles are being spent today optimizing on AMD Instinct GPU-based infrastructure, and that there is a groundswell of activation in this space. With the announcement of a new developer cloud and free access for all developer attendees, AMD is providing the access and tools required to support community advancement. While CUDA has a tremendous lead in this space, AMD is taking the right steps to at least get developers to take its tools and platforms for a test drive, and in doing so gain traction with developer loyalty.
With AI becoming a central aspect of geopolitical policy, and as nation-states race for AI sovereignty and supremacy, Lisa shared how important AI advancement for all is to realizing the true vision of this historic technology. Tareq Amin, CEO of Humain, a leading AI operator in Saudi Arabia, came onstage to share his vision for AI advancement in the kingdom, noting that Saudi Arabia is a young nation full of innovation potential. The collaboration announced earlier this spring will bring AI platforms featuring AMD Instinct GPUs, EPYC CPUs, Pensando DPUs, Ryzen AI, and ROCm software to Humain data centers, delivering 500 megawatts of compute capacity over the next five years. This reflects a joint commitment to democratize AI access and foster innovation through scalable compute.
One thing that was striking about this event was the customer-centricity in everything the AMD team designed. Through walk-ons with OpenAI CEO Sam Altman, Meta, Oracle Cloud, Cohere, and more, Lisa and team continually discussed how deep collaborations with customers directly fuel design targets for AMD silicon advancement. This laser focus played out in the progress of those collaborations, from science-project-sized deployments of first-gen Instinct GPU clusters to scale deployments running significant customer workloads today. It is also reflected in the open innovation focus, with the notion that many voices, centered on customer requirements, will define ultimate AI advancement. It also underscores the danger of concentrating so much global innovation in the hands of a single company.
Altman's appearance was particularly notable as he announced that OpenAI will be using AMD's upcoming MI400 chips, telling the audience, “It's gonna be an amazing thing” – a significant endorsement that lends considerable credibility to AMD's ambitious 10x performance improvement targets for the MI400.
So what’s the TechArena take? While NVIDIA certainly holds pole position on both market share and AI zeitgeist, the industry is collectively hungry for alternatives and worried that a single vendor will squeeze industry opportunity from this important moment. Frankly, the TAM for AI accelerators is too vast not to expect competitive alternatives to earn a segment of upcoming deployments, and AMD has put together the portfolio and partnerships to be that leading competitor. What’s striking to me is the advancement that the company is delivering gen-over-gen to bridge the gap and deliver a credible alternative to market. Investments and leadership in UAL and Ultra Ethernet will pay off as InfiniBand, a proprietary solution that was developed decades ago, is showing some tarnish.
I want to see more from ROCm software advancement and developer traction to get true solution parity with CUDA-fueled solutions, and AMD has work to do to deepen relationships with this essential element of the ecosystem. But even given that, the advancement is truly eye-opening and welcome, inspiring us to anticipate what comes next. Well done, AMD.

Functional safety, as described by the ISO 26262 specification and covered in multiple blog posts, is something that has only recently been supported in low-power double data rate (LPDDR) 5 automotive memory. It is probably safe to assume that, going forward, functional safety will also be supported for storage devices. Make no mistake: both memory and storage devices are complex and contain many different safety elements. Without safety mechanisms to detect and flag a failure in these elements, unpredictable and perhaps catastrophic results can occur.
A simple example of an otherwise undetectable error would be the case of a failed address decoder. While the host device believes that it is either reading from or writing to a specific memory location, a failed address decoder can lead to extreme data corruption, resulting in unpredictable system-level behavior because of writing or retrieving data from the wrong location. A safety mechanism that can detect an addressing failure and provide a failure flag allows for the system to take the appropriate action, ranging from disengaging the advanced driver assistance system (ADAS) to deliberately crippling the vehicle. The point, again, is that the adoption of state-of-the-art technologies is being driven by the automotive industry.
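To make the concept concrete, here is a toy software model of one such safety mechanism: tagging every stored word with a checksum computed over its address and data, so that a read silently served from the wrong location no longer verifies. The names and the Python framing are ours for illustration; production automotive memories implement checks like this in silicon, not software.

    import zlib

    memory = {}  # address -> (data, checksum)

    def tag(address, data):
        # Checksum binds the data to the address it was written to.
        return zlib.crc32(address.to_bytes(8, "big") + data.to_bytes(8, "big"))

    def safe_write(address, data):
        memory[address] = (data, tag(address, data))

    def safe_read(address, decoder_fault=0):
        # decoder_fault models a failed address decoder: the location
        # actually accessed differs from the one the host requested.
        data, stored = memory[address ^ decoder_fault]
        if stored != tag(address, data):
            raise RuntimeError("addressing fault detected: raise safety flag")
        return data

    safe_write(0x1000, 42)
    safe_write(0x1040, 99)
    assert safe_read(0x1000) == 42            # healthy decoder: data verifies
    try:
        safe_read(0x1000, decoder_fault=0x40) # fault flips one address bit
    except RuntimeError as err:
        print(err)                            # the system can now react safely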
A relative newcomer to the memory market, HBM (high bandwidth memory) is also finding its way into the automobile as multi-modal generative AI is actively employed to implement context-aware navigation. This class of navigation takes ADAS well beyond recognizing street signs, pedestrians, and cyclists, or handling basics such as lane keeping and automatic emergency braking.
Context-aware navigation relies upon a class of neural networks referred to as large language models (LLMs), which demand extreme levels of compute performance. Ultimately, through real-time understanding of the environment, more intelligent driving decisions and behaviors can be made, mimicking those of a human driver. Examples include pulling over to the side of the road when an emergency vehicle is approaching with lights and siren engaged, or cautiously entering an intersection or roadway when there is a lot of traffic, pedestrian or otherwise. When I was first learning how to drive, this was referred to as "defensive driving," which basically is all about anticipating how a scenario might unfold and either acting or being prepared to act in case that's what's required to avoid an accident.
Multi-modal generative AI refers to the fact that, in addition to supporting text data sets, the LLM can also support other input data sources – most notably, video and even audio. Currently LLMs are all the rage as they can predict the next word in a sentence with reasonable degrees of accuracy – a concept that we are all becoming familiar with as AI continues to expand its reach into just about every facet of our lives. (As I write this, Microsoft Word is trying to predict which words I am going to type – nothing like someone telling you how to think!)
Multi-modal generative AI, when applied to ADAS, can predict possible scenarios and act accordingly – replicating the equivalent of “defensive driving.” Equally as important, ADAS that employs generative AI communicates directly to the driver exactly which actions are going to be taken and the rationale for those actions. This extended communication leads to increased driver and passenger confidence in the operation of the ADAS system.
With an appreciation of what multi-modal generative AI brings to the table, it should become apparent that an extreme amount of compute performance is required to address context-aware navigation. For the past several decades, the bottleneck for compute performance has been, and continues to be, memory bandwidth, not the performance of the CPU or the AI offload engine. This has driven the relatively recent introduction of HBM, an "in-package" memory solution. "In package" means that these devices are not available in discrete packages and instead must be tightly integrated into a single common package alongside the AI or CPU compute engine.
The latest-generation HBM3E offers up to 1 terabyte per second of memory bandwidth, derived from 1024 I/O pins operating at multi-gigabit signaling rates. Generative AI and LLMs are hot, driving oversubscribed, insatiable demand for HBM to the point where it has been publicly stated that all HBM capacity has been sold out through 2025. Here again, the use of LLMs in the auto is driving the use of state-of-the-art, oversubscribed HBM.
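The arithmetic behind that headline number is straightforward. Assuming a nominal 8 gigabits per second per pin (shipping HBM3E parts signal even faster):

    1024 pins × 8 Gb/s per pin ÷ 8 bits per byte = 1024 GB/s ≈ 1 TB/s per stack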
Not only has the automotive industry progressed to the point where it is now seen as the main driver of memory and storage, but these technologies have also moved from the back seat of the car to the front seat in realizing the vehicle of today and the future.

A recurring theme I have raised over the past dozen blogs is that the automobile has transformed considerably over the course of the past few decades, from employing mature semiconductor technologies to the current state where the automotive market not only employs state-of-the-art technologies, but now drives development for semiconductor technologies.
This point is well illustrated by the memory and storage industry. Historically, the memory industry recognized the personal computer, then networking and communications applications, and finally smartphones as driving innovation. However, at last year's JEDEC meeting (a forum that aligns memory standards to guarantee interoperability), representatives from the major memory and storage companies agreed that automotive applications have taken a front seat as a lead technology driver.
It’s no wonder why: today’s automobile employs leading-edge semiconductor devices and electrical/electronic (E/E) architectures with some of the highest compute performance across the board. And it does so in an environment more stringent and unforgiving than that of the data center or smartphone. For any semiconductor device to be given practical consideration in an automotive application, a litany of quality specifications (AEC-Q100, TS 16949, etc.) must be supported, in addition to guaranteed operation at extended temperatures ranging from -40°C to +125°C depending on the physical location of the device. Memory and storage devices are no exception to these requirements.
Automotive is affecting all memory and storage types. In memory, the influence ranges from state-of-the-art HBM (high bandwidth memory), which is heavily embraced in the data center for generative AI, to low-power double data rate (LPDDR) and double data rate (DDR) memories; in storage devices, device types affected include universal flash storage (UFS) and solid-state drives (SSDs). These acronyms reflect the potpourri of different memory and storage technologies in the market: all of them can be found in today’s or tomorrow’s automobile. I’ll discuss many examples below and in a second blog post shortly to follow.
As the automotive industry accelerates toward broad deployment of the software-defined vehicle (SDV), the number of lines of software used to control the vehicle is growing explosively. Today’s high-end vehicle contains well over 100 million lines of code, which is expected to grow to 1 billion lines by 2030. This alone drives large storage requirements in the vehicle. Writing to semiconductor-based (flash) storage is a destructive process: a given region supports a limited number of write cycles before it must be retired. The technique of tracking write cycles and reallocating data to “fresh” storage is referred to as wear leveling.
Wear leveling ensures that stored data is not lost by writing to a cell that no longer supports data retention. Expired cells are retired and managed via an internal map, created by the storage controller, that tracks available versus retired locations based on per-cell write counts. With a code footprint reaching up to 1 billion lines, it is reasonable to expect frequent over-the-air (OTA) updates of that code over the 10-year lifespan of the vehicle. Given the fixed number of write cycles flash can support, this demands a significantly larger storage footprint to accommodate regular updates. Additionally, at least twice the required storage is needed to accommodate roll-back in case an OTA update is unsuccessful.
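Here is a minimal sketch of the wear-leveling idea, under simplifying assumptions: logical blocks map one-to-one to physical blocks, writes always land on the least-worn free block, and a block is retired after a fixed write budget. Real flash controllers add erase blocks, garbage collection, and ECC on top of this.

```python
MAX_WRITES = 5   # endurance budget per block (tiny, for illustration)

class WearLeveler:
    def __init__(self, n_physical: int):
        self.wear = [0] * n_physical          # write cycles per physical block
        self.retired: set[int] = set()
        self.map: dict[int, int] = {}         # logical block -> physical block

    def write(self, logical: int, data: bytes) -> int:
        in_use = set(self.map.values())
        live = [p for p in range(len(self.wear))
                if p not in self.retired and p not in in_use]
        if not live:
            raise RuntimeError("all blocks retired or in use: storage exhausted")
        target = min(live, key=lambda p: self.wear[p])   # least-worn block
        self.map[logical] = target            # old mapping is implicitly freed
        self.wear[target] += 1
        if self.wear[target] >= MAX_WRITES:   # endurance reached: retire it
            self.retired.add(target)
        return target

wl = WearLeveler(n_physical=8)
for _ in range(20):                           # e.g., repeated OTA rewrites
    wl.write(logical=0, data=b"ota-image")
print(wl.wear)                                # cycles spread across blocks
```

Rewriting the same logical block twenty times touches every physical block a few times instead of hammering one cell to death, which is exactly the property the controller's internal map exists to provide.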
Suffice it to say, security is of paramount importance, leading to the introduction of ISO 21434, a standard expressly defined to address cybersecurity for automotive applications. It focuses on the methodologies used to design, test, and verify memory and storage so they are hardened against malicious attack. SDVs are designed expressly with OTA in mind, so state-of-the-art security is paramount. Meanwhile, the advent of quantum computing threatens to disrupt the current security landscape in just a few short years, giving rise to an emerging branch of cybersecurity referred to as post-quantum cryptography (PQC). This too will need careful consideration to ward off malicious attacks, and it will again require state-of-the-art technologies in the vehicle down the road.
Continuing along the trajectory of automotive semiconductor storage technologies: because the SDV drives a centralized architecture, aggregating all storage in a single location leads to a more efficient means of supporting OTA updates, in addition to savings in overall board area and gains in efficiency. One example of this efficiency is the sharing of map data between advanced driver assistance systems (ADAS) and in-vehicle infotainment (IVI). ADAS uses map data to navigate the roadways to the desired location, whereas the IVI system uses the same data to show the driver and passengers exactly where the vehicle is on a given roadway, or to enter a destination.
Centralized storage devices such as SSDs with single-root I/O virtualization (SR-IOV) allow multiple different sources to access a common storage area while providing hardware isolation, preventing user-downloaded applications from affecting mission-critical code. Here again, SR-IOV represents some of the state of the art in storage technologies being employed in today’s automobile.
In part two, I’ll discuss how two more use cases—functional safety and context-aware navigation powered by multi-modal generative AI—are driving further innovation in memory and storage technologies.

The infrastructure to support AI workloads is evolving as rapidly as AI workloads are growing.
In a strategic partnership announced today, WEKA and Nebius are meeting that challenge head-on – delivering a GPU-as-a-Service (GPUaaS) platform that brings ultra-high performance, scalability, and simplicity to the AI infrastructure market.
The solution integrates WEKA’s AI-native data platform with Nebius’ full-stack AI cloud, offering customers an infrastructure backbone purpose-built to handle the unique demands of AI model training and inference at scale. The partnership gives insight into how the next generation of AI cloud infrastructure will look.
Enterprises training cutting-edge models often face infrastructure constraints in four areas: compute, memory, storage, and data management. These friction points stall innovation and increase time to value. The WEKA-Nebius collaboration addresses these limitations by delivering a cloud-native solution with microsecond latency, high throughput, and seamless scalability from petabytes to exabytes of data.
At the heart of the solution is Nebius’ GPU-rich AI Cloud, a purpose-built platform designed from the ground up for AI/ML workloads. Nebius blends proprietary cloud software, in-house hardware design, and developer-first tooling to deliver a streamlined environment for model builders, from startups to research institutions.
To fuel its premium tier, Nebius selected WEKA’s data platform, citing its consistent performance across mixed I/O workloads, robust metadata handling, and multitenancy capabilities – must-haves for large-scale AI environments.
“WEKA exceeded every expectation and requirement we had,” said Danila Shtan, CTO at Nebius. “It delivers outstanding throughput, IOPS, and low latency while managing mixed read/write workloads at scale.”
One of the first deployments of this integrated solution is already in action at a leading research institution. The organization selected Nebius to power its large-scale experimentation and AI model development and brought in WEKA to meet storage performance and manageability needs. The result? A multi-thousand-GPU cluster backed by 2PB of WEKA storage – delivering a fully managed, high-performance environment tailored for rigorous AI research.
Key features like user and directory quotas were critical in customizing the platform to the institution’s operational demands. And by pairing Nebius’ scalable compute with WEKA’s ultra-fast storage layer, the deployment ensures minimal bottlenecks and maximum utilization, accelerating time to insights.
This partnership exemplifies a key trend in enterprise AI: the rise of neoclouds. Unlike general-purpose hyperscalers, neocloud providers like Nebius offer tailored platforms for AI development, focusing on performance, control, and flexibility. These environments are quickly becoming the go-to solution for enterprises that want to move fast without compromising on power.
Meanwhile, WEKA continues to cement its position as the high-performance storage layer for AI, enabling faster training and smarter infrastructure utilization. In environments where every millisecond counts, the ability to reduce latency, improve GPU utilization, and eliminate data silos can be the difference between leadership and lag.
“Together, Nebius and WEKA are redefining what's possible when high-performance storage meets AI-first infrastructure,” said Liran Zvibel, WEKA CEO. “It’s a unified solution that is a catalyst for enterprise AI and agentic AI innovation.”
The WEKA-Nebius solution is a compelling model for what’s next: AI-native infrastructure as a service, where every layer of the stack is designed to accelerate AI.
Learn how Nebius and WEKA are powering next-gen AI infrastructure.

On Monday, Qualcomm announced that it is acquiring Alphawave Semi (formerly Alphawave IP Group plc) for $2.4 billion, or 183 pence per share. According to Qualcomm’s announcement, the move will “further accelerate, and provide key assets for, Qualcomm’s expansion into data centers” by complementing the company’s Qualcomm Oryon central processing unit and Hexagon neural processing unit.
Alphawave Semi was founded in 2017. The company became known for its high-performance connectivity and compute solutions, including customizable chiplets, which we discussed with Vice President of Product Marketing and Management Letizia Giuliano in an interview last fall:
“There is no one solution, no one size fits all,” she said. “All our customers and our systems we're building today need to be tailored to the particular workload, the particular place in the data center they are, and that transfers to the hardware that we are designing.”
Qualcomm cited the value of custom silicon as part of its announcement. “Alphawave Semi has developed leading high-speed wired connectivity and compute technologies that are complementary to our power-efficient CPU and NPU cores,” said Cristiano Amon, president and CEO of Qualcomm Incorporated. “Qualcomm’s advanced custom processors are a natural fit for data center workloads. The combined teams share the goal of building advanced technology solutions and enabling next-level connected computing performance across a wide array of high growth areas, including data center infrastructure.”
The deal is subject to standard regulatory approvals and is expected to be completed in the first quarter of 2026.
Monday’s news comes hot on the heels of last week’s announcement that AMD is acquiring the engineering staff of Untether AI. With two acquisitions of smaller industry players in such a short time, we at TechArena are asking if this is the start of an AI silicon acquisition feeding frenzy in the tech waters.
When Alphawave Semi went public on the London Stock Exchange in May 2021, it was valued at £3.1 billion (410 pence per share). Its sale price to Qualcomm is less than half that, and that’s after a significant bump in valuation after word of Qualcomm’s interest leaked in April. With skyrocketing demand and international uncertainty caused by US-led tariff wars, it’s clearly a challenging time to be a smaller player in the industry. Qualcomm itself is looking to boost its offerings to break into data centers and earn a greater share of the growing market for AI silicon as it competes against industry powerhouses like NVIDIA and AMD.
We’ll be watching closely for more signs of consolidation in the industry as an already wild year continues to unfold.

Last week, AMD announced two back-to-back acquisitions intended to strengthen its position in the AI market: first, AI software optimization startup Brium, and second, the engineering employees of AI inference chip developer Untether AI. The pair of disclosures came as the company prepares for Advancing AI, an event later this week at which it plans to unveil a 4-minute AI video and showcase how AMD is advancing AI across industries.
With the cluster of events, we at TechArena decided to take a closer look at the acquisitions and what they mean for AMD’s AI strategy.
Brium: Optimization Software for the Inference Stack
AMD announced the acquisition of Brium on June 4 in a blog post by Anush Elangovan, corporate VP of software development. Elangovan hailed the start-up’s “team of world-class compiler and AI software experts with deep expertise in machine learning, AI inference, and performance optimization.” He added that this technological expertise will “play a key role” in enhancing AMD’s AI platform.
The Brium team will play that role both through the software they bring to AMD and through the contributions of the new employees. Elangovan called out Brium’s unique ability to optimize the inference stack before a model reaches the hardware. He also named specific key projects that the Brium team will start contributing to in order to enable “faster, more efficient execution of AI models on AMD Instinct GPUs.”
Untether AI: Efficient AI Inference Chip Engineering Expertise
Untether AI, which was featured on our In the Arena podcast in January, was tackling inference efficiency from the hardware direction, focusing on AI accelerator products and a software development kit. As Untether AI’s Bob Beachler said:
“We were really founded to solve inference compute in AI. Unlike training, which gets a lot of the press and heat right now, we know that inference is going to be a much larger marketplace because it’s going to run 24-7, 365.... [So we] really focused on how do you run AI inference as energy efficiently as possible.”
Untether’s solution was a novel “at-memory” compute architecture designed to minimize data movement and maximize compute performance. The company combined this with software optimization to further improve performance.
Per the terms of the sale that have been disclosed, AMD is acquiring and hiring Untether AI’s hardware and software engineers, but not the company as a whole. Untether AI will no longer sell or support its products because of the sale.
AMD Releases Cinematic AI Video Powered by AMD Instinct™ MI325X
Adding to the buzz, AMD this week dropped a cinematic, AI-generated, 4-minute video that spotlights innovation and was made using Instinct MI325X GPUs. Created in partnership with Higgsfield AI and TensorWave, the video features a brave developer who follows her intuition and discovers AMD ROCm, breaking it free and delivering the open software platform to developers everywhere – masterfully pulling through the company’s perpetual key message: Together we advance. With beautiful “cinematography” reminiscent of Ready Player One, the video is expected to be showcased at the Advancing AI event this week.
The TechArena Take
As a company that is continually experimenting with and adopting the latest AI image and video generation tools, our hats are off to AMD, Higgsfield AI and TensorWave for taking on the ambitious video project and knocking it out of the park. The gauntlet has been laid down.
With the two acquisitions by AMD focusing on inference efficiency, it’s clear that, in 2025, attention is turning to resource management in model deployment. As more real-world implementations take root, companies are weighing the effects of power-hungry GPUs on inferencing workloads and looking at how those costs can be minimized.
AMD’s purchases look like a move to get ahead of the escalating energy use associated with inferencing workloads running everywhere from the data center to the edge. The dual acquisitions give AMD multiple options: increasing the efficiency of its GPU lineup via software, or getting a jump start on new architectures for AI accelerators.
According to a recent LinkedIn post by Hugging Face product + growth manager Jeff Boudier, the open-source community got early access to AMD’s new MI355X GPUs and is impressed with what it is seeing. Hugging Face is currently running over 80,000 tests on the new GPUs, which are manufactured on TSMC's 3nm node and built on AMD’s CDNA 4 architecture.
We’ll be following this week’s conference and related news closely.

OpsRamp, an HPE company, claimed their turn in the spotlight during Cloud Field Day to demo their impressive unified and AI-powered IT operations management platform. The software-as-a-service solution addresses one of the most persistent challenges in modern IT operations: the complexity of managing multiple environments, tools, and data sources scattered across on-premises and cloud infrastructures.
OpsRamp’s foundation is based on a three-pronged strategy:
First, the platform provides unified observability by consolidating all data from applications, servers, network devices, and cloud environments into a single tool.
Second, it offers AI-powered analytics to help operators understand what’s happening across their infrastructure so they can prevent and solve problems.
Third, it includes intelligent automation for corrective actions when issues are detected, potentially reducing resolution times from days to hours.
The architecture’s flexibility stood out as particularly practical. OpsRamp offers both agent-based and agentless monitoring approaches, with their lightweight Go-based agent consuming minimal resources. The platform integrates with over 3,000 third-party tools and can act as a “monitor of monitors,” ingesting alerts from existing solutions via webhooks. Once the alerts are ingested, OpsRamp can apply its AI-powered analytics to do alert correlation. This coexistence capability addresses the reality that most organizations can’t just replace their entire monitoring stack overnight.
From a business perspective, the subscription-based licensing model seems straightforward, charging per monitored resource (e.g., a server, network device, wireless access point, or even a public cloud resource) with different ratios for different device types: a server counts as one resource, while four wireless access points count as one resource. The platform includes up to 50 metric series per resource with 12-month data retention for metrics by default.
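As a back-of-the-envelope illustration of that counting scheme: the server and access-point ratios below come from the presentation, while the network-device ratio (and the function itself) is an assumption for illustration.

```python
import math

# Units of each device type that count as one billable resource.
UNITS_PER_RESOURCE = {"server": 1, "network_device": 1, "wireless_ap": 4}

def billable_resources(inventory: dict[str, int]) -> int:
    return sum(math.ceil(count / UNITS_PER_RESOURCE[kind])
               for kind, count in inventory.items())

print(billable_resources({"server": 120, "network_device": 40, "wireless_ap": 30}))
# 120 + 40 + ceil(30 / 4) = 168 billable resources
```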
What impressed me during the live demonstration was the platform’s alert correlation capabilities. The team showed how OpsRamp’s machine learning can identify cascading failures—like when a single network switch port failure triggers multiple alerts across virtualization layers, databases, and applications. Instead of overwhelming operators with many individual alerts that require manual correlation, the platform creates what OpsRamp calls “inference alerts” that group related incidents together and identify the probable root cause.
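OpsRamp’s actual correlation uses trained machine-learning models, but a toy heuristic shows the shape of an “inference alert”: group alerts that arrive close together in time, then nominate the alert lowest in the dependency stack as the probable root cause. All names and the layer ordering below are illustrative.

```python
from dataclasses import dataclass

# Lower layer number = closer to the physical root of the dependency chain.
LAYER = {"switch": 0, "hypervisor": 1, "vm": 2, "database": 3, "application": 4}

@dataclass
class Alert:
    resource: str
    layer: str
    timestamp: float   # seconds

def correlate(alerts: list[Alert], window: float = 60.0) -> list[dict]:
    """Group alerts within `window` seconds into one inference alert."""
    groups, current = [], []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if current and alert.timestamp - current[-1].timestamp > window:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return [{"root_cause": min(g, key=lambda a: LAYER[a.layer]).resource,
             "related": [a.resource for a in g]} for g in groups]

storm = [Alert("app-checkout", "application", 12.0),
         Alert("db-orders", "database", 8.0),
         Alert("esxi-07", "hypervisor", 5.0),
         Alert("sw-core-2/port9", "switch", 3.0)]
print(correlate(storm))
# One inference alert: root cause 'sw-core-2/port9', four related alerts.
```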
OpsRamp represents a holistic response to the persistent problem of IT operations complexity. The presenters made a compelling case for moving beyond the siloed tool approach, in which organizations may have separate platforms for network, storage, compute, database, and cloud monitoring. The combination of comprehensive observability, intelligent correlation, and governed automation creates a strong value proposition, and the platform’s ability to work alongside existing tools rather than requiring wholesale replacement makes it particularly attractive for enterprise environments with significant legacy investments.
The alert correlation capabilities using machine learning could genuinely transform how operations teams handle incidents. For organizations struggling with alert fatigue and the operational overhead of maintaining multiple monitoring silos and correlating the data to trouble-shoot issues, OpsRamp offers a remarkable solution.

MinIO brought Cloud Field Day 23 to a strong finish with a presentation on their flagship enterprise offering, AIStor, highlighting the critical shift toward object-native storage for AI and analytics workloads.
Object storage is critical to today’s AI and analytics landscape. Every major large language model (LLM) was built using object storage, and all data lakehouse analytics tools have become object-native by design. This isn’t surprising, since cloud storage services like AWS S3 are inherently object-native, but it presents challenges for on-premises deployments.
The core problem MinIO addresses is the architectural limitations of traditional retrofit object gateway solutions. These systems stack multiple layers, including a gateway layer for translation, a metadata database, a storage area network or network attached storage (SAN/NAS) backend, and a storage network. These layers create performance bottlenecks, data consistency issues, and scale limitations.
MinIO’s AIStor takes a fundamentally different approach with a gateway-free, stateless architecture using direct-attached storage. This eliminates the translation layer and central metadata databases, instead writing metadata atomically alongside the actual data; a deterministic hashing approach avoids the need for a central database entirely. The result is guaranteed read-after-write and list-after-write consistency at massive scale, something impossible with traditional approaches.
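One generic way to see how deterministic hashing can replace a metadata database is rendezvous (highest-random-weight) hashing, sketched below. Every client computes the same score for each (object, node) pair, so an object's home nodes are a pure function of its name and the node list, with no lookup table to consult. Whether AIStor’s internal scheme matches this exactly is not something the presentation specified.

```python
import hashlib

def score(node: str, key: str) -> int:
    # Same hash everywhere -> same placement answer everywhere.
    return int.from_bytes(hashlib.sha256(f"{node}/{key}".encode()).digest()[:8], "big")

def placement(key: str, nodes: list[str], copies: int = 2) -> list[str]:
    """Deterministically choose which nodes hold `key` -- no central DB."""
    return sorted(nodes, key=lambda n: score(n, key), reverse=True)[:copies]

nodes = [f"node-{i}" for i in range(8)]
print(placement("bucket/training-shard-0042.parquet", nodes))
# Every client computes the same answer, so reads never consult a database.
```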
A key differentiator for AIStor is its truly software-defined nature, meaning it can run on any industry-standard hardware. Unlike storage solutions that require specific appliances, MinIO works across the spectrum, from small deployments on a Raspberry Pi to massive 1,000+ node clusters. The company recommends minimum system requirements for serious production environments and recommends NVMe drives over hard drives, but AIStor runs on any industry-standard, off-the-shelf hardware.
Real-world deployments demonstrate AIStor’s capabilities. A large autonomous vehicle manufacturer runs 1.35 exabytes on AIStor after previous platforms failed at 20 to 50 petabytes (PB). A leading cybersecurity company runs about 1.25 PB on AIStor, having repatriated data out of AWS; the transition helped improve its gross margin by around 2 to 3%. Finally, a fintech payments provider serving nearly half a billion merchants and processing billions of small files currently runs 30 PB with a plan to scale to 50 PB, meeting strict service level agreements its previous appliance-based solution couldn’t handle.
MinIO AIStor represents a powerful solution for enterprises serious about AI and analytics at scale. The object-native architecture addresses fundamental limitations of traditional “unified” solutions, while real-world deployments prove the technology works at exabyte scale. For organizations moving beyond test environments into production AI workloads, AIStor is a smart choice for a solid foundation that can grow from petabytes to exabytes without architectural rewrites.

Qumulo’s presentation at Cloud Field Day 23 highlighted their innovative approach to unstructured data storage across hybrid cloud environments. The company, known for their advanced file system technology that enables unified global data management across data centers and clouds, has built a customer base spanning entertainment, healthcare, pharmaceuticals, and government sectors during their 13 years in business.
The core of Qumulo’s offering is their cloud data fabric, designed to unify data across any hardware platform and cloud environment. The company’s solution works with unstructured data, which makes up roughly 80% of what feeds enterprise systems and includes film/video, images, telemetry, game builds, mapping/GIS, and more. By supporting data on any hardware and any cloud, it gives customers freedom of infrastructure choice, allowing them to pick the configuration that is best for them.
Central to Qumulo's approach is a strictly consistent, correct, and durable file system that extends authentication and authorization to surrounding systems. This ensures data is available to authorized users in its most accurate and consistent form, which is critically important for AI applications that depend on current, correct data. The system can treat distributed data pools as a single file system while maintaining local performance characteristics, addressing pain points around data gravity and geographic distribution.
I was blown away by Qumulo’s customer testimonials as the value proposition came to life. The company believes it serves 10 of the world's 10 largest video production houses, with many recent major releases having some connection to its infrastructure. In pharmaceuticals, Qumulo supports 10 of the 10 largest recipients of grants from the National Institutes of Health and the National Science Foundation. It also handles critical applications like wildfire modeling for Southern California firefighters and real-time crime center operations.
What impressed me from the talk was Qumulo’s focus on solving real operational challenges. Their cloud data fabric addresses pain points around data gravity, geographic distribution, and strict consistency across hybrid environments. The ability to treat distributed data pools as a single file system while maintaining local performance is technically impressive and practically valuable.
Customer stories demonstrate clear ROI, from helping mortgage underwriters improve productivity from 3 to 14 applications daily with AI processing technologies to enabling global visual effects collaboration without traditional data movement constraints. The combination of technical innovation with real-world problem-solving makes Qumulo compelling for enterprises dealing with large-scale unstructured data challenges.

The HPE GreenLake and SAP teams joined together at Cloud Field Day 23 to kick off proceedings on June 5, and the topic was SAP ERP. SAP HANA, first delivered in 2010, provides enterprises with an in-memory database that fuels the S/4HANA ERP solution. S/4 has had mixed results in the market, driving SAP to introduce SAP Cloud ERP, a platform-as-a-service offering, as an alternative to on-prem packaged deployment.
The HPE team’s message was that SAP’s evolution of this solution has challenged customers, demanding too much change and offering solutions too rigid to gain traction in the market. Yet key learnings from early adopters have shaped a new approach enabled by GreenLake. It will let enterprises run the solution in their own data centers (managed by HPE) and navigate the end of life of the traditional ERP solution coming in 2027. The solution also provides a path to multi-tenancy and public cloud adoption.
How does this solution come together? Private cloud ERP will be delivered with SAP support. The cloud ERP provides a foundation for layering on ecosystem solutions and Joule agents to drive functions like finance, spend, supply chain, HR, and customer service. The team has packaged the software plus infrastructure management and services, cloud services, and upgrade support into one integrated solution.
The collaboration provides a path to simplified adoption for enterprises, and we expect SAP and HPE are banking on this single solution to help accelerate adoption of what has been a clunky offering. The public cloud ERP is touted as simpler to use, and the integration of Joule agents is touted as a productivity driver across the enterprise. With ecosystem solution integration, the team provides flexibility to deploy solutions tuned for unique enterprise environments on a common ERP foundation. This path holds a lot of value for large organizations that don’t want the complexity of managing an on-prem implementation but still want control of their most precious data within a managed private cloud.
SAP is a complex and powerful solution, and kudos to the SAP team for recognizing the complexity of traditional deployment models. HPE understands enterprise-class requirements for data security and control, making it a solid partner choice for this delivery. While I would like to have heard more in the presentation on why customers benefit from this SAP RISE solution versus competitive offerings in what still feels like a very heavy lift, there’s no question that enterprises have relied on SAP and will continue to do so with this modernized solution.

Amber Huffman and Jeff Andersen of Google join Allyson Klein to discuss the roadmap for OCP LOCK, post-quantum security, and how open ecosystems accelerate hardware trust and vendor adoption.

Scality brought the heat at Cloud Field Day 23, telling its story through the lens of customer success. The company, which has worked in the storage software arena for the last 15 years, discussed the latest with Scality Ring. Its history began with the rise of the cloud, responding to service providers’ need for a storage foundation, and has since extended into delivering enterprise on-prem cloud storage solutions.
With this foundation, Scality began pivoting to AI data lakes a few years ago, adding integrated security and data aggregation to feed the initial phases of data cleaning before data is used for AI training. The company differentiates on extreme scale: support for one to thousands of apps, 100 petabytes (PB) per failure domain, hundreds of billions of objects, and more.
We walked through a number of use cases, starting with Splunk. Ben Morge presented a case study of a US-based bank with a 30-40 PB Splunk data store across multiple sites and one-year data retention. He described a configuration featuring an SSD-based hot tier connected to a Ring warm-tier backup, with data replicated across sites on HPE Apollo servers. The complexities of this configuration involved mitigating migration ingest overload and unexpectedly high production GET traffic, addressed by increasing server capacity to sustain 75 GB/s of GET throughput per site. The result? This tiered approach provided a resilient multi-site storage solution with the required retention policy.
Next up was Nick Sayer, describing a challenge tackled for CNES: a multi-PB satellite imaging store retaining new imagery in a hot tier for six months, alongside multiple hundreds of PB of cold-tier image data. The customer had a hodgepodge of storage environments and wanted to convert multiple formats to S3 in a cost- and energy-efficient environment.
The solution? A hot/warm-tier Ring across three data centers in a single-namespace S3 data lake, coupled with a cold-tier tape solution. Any user query seeking data on tape prompts a temporary restore from the cold tier so the data can be used before it returns to its tape home. The multi-site tape solution stretched Ring’s capabilities, requiring support across multiple public clouds and tape providers, and unique API development by the Scality team to deliver the resilience that tape solutions require.
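The restore flow described maps naturally onto the standard S3 RestoreObject pattern, sketched here with boto3. The endpoint, bucket, and key are hypothetical, and whether Scality’s tape tier exposes exactly these parameters is an assumption; only the request-then-poll shape is the generic S3 one.

```python
import boto3

# Hypothetical S3-compatible endpoint fronting the Ring deployment.
s3 = boto3.client("s3", endpoint_url="https://s3.example-ring.internal")

# Ask for a temporary copy of a tape-resident object (kept for 7 days here).
s3.restore_object(
    Bucket="satellite-imagery",
    Key="scenes/2024/orbit-118/raw.tif",
    RestoreRequest={"Days": 7},
)

# Poll until the temporary copy is ready, then read it like any S3 object.
head = s3.head_object(Bucket="satellite-imagery", Key="scenes/2024/orbit-118/raw.tif")
if 'ongoing-request="false"' in head.get("Restore", ""):
    s3.download_file("satellite-imagery", "scenes/2024/orbit-118/raw.tif", "raw.tif")
```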
Next up was Aurelien Gelbart, providing insights on a French bank deployment that integrated multiple storage solutions for internal data, with an aim of a lower price per TB and a bill-back model charging business groups for usage. The challenge was moving over 2,000 applications’ data onto one Ring platform, including database backup, financial data, data lake, DMS, artifacts, and media. The deployment required rapid scale, growing from 1 to 50 PB over seven years. Scality tackled it with software improvements to handle the complex environment as well as architectural improvements that allowed server flexibility. The solution today supports 100 PB of storage, over 300 billion objects, and 1 billion client operations per day, offering incredible scale and resiliency for the bank…all with a cost reduction of 75%, driven primarily by consolidating systems across business groups.
With these customers and more relying on Scality’s scale, flexibility, and engineering know-how, it’s no wonder that global brands rely on the software for storage integration. We like the way the team shared how they work with customers to solve unique challenges, speaking to the deep level of collaboration required for these large-scale deployments. We see value in how they tackle unique opportunities like tapping cold-tier tape and mitigating the complexity with collaborations and custom API development. We see no end of demand as more companies seek AI data lakes to fuel broad scale AI deployments. In other words…we expect to see more fantastic customer deployment stories from Scality in the months ahead.

DALLAS – June 4, 2025 – Several Palo Alto Networks executives took the stage for the company’s Ignite on Tour event in Dallas, delivering the clear message that complexity is the enemy of security, and AI is both the threat and the answer.
In a morning packed with insightful commentary, real-time attack simulations, and partner insights from Sabre, CDW, and others, the Palo Alto Networks team made the case for a new security architecture built from a unified, intelligent platform.
Kumar Ramachandran, president of network security at Palo Alto Networks, opened with a bold analogy: today’s AI inflection point mirrors the petroleum-fueled transformation of the 1900s. As in the dot-com era, he noted, companies faced a decision – lead the change or be changed by it.
Kumar described a seismic shift in the nature of cyberattacks, saying the time period between reconnaissance and impact is now greatly truncated.
“The time period has shrunk from what used to be weeks, if not multiple weeks, to a few small hours,” he said. “The much larger percentage of attacks feel like a zero-day attack.”
With attackers using LLMs for reconnaissance, phishing, and vulnerability discovery, traditional defenses are crumbling under speed and volume, he said.
Palo Alto Networks’ prescription? A fully integrated security platform that fulfills what he called the “three C’s” of modern data: complete, consistent, and correct. In a world where the average enterprise juggles 83 security tools across 30 vendors, that level of data quality isn’t achievable through manual integration.
The company outlined its three major platform pillars; the unifying thread across them is automation powered by AI – not just to detect threats, but to understand context and reduce human overhead across environments.
AI Is Transforming Work – and Creating Risk
Anupam Upadhyaya, SVP of SASE products, demonstrated how AI is both a productivity accelerant and a security nightmare. At Palo Alto Networks, internal developer productivity has increased by 20–30% thanks to AI copilots and code assistants, he said.
But these tools create new vulnerabilities. Anupam showcased AI Access Security, a tool that gives organizations visibility into which generative AI tools employees are using – including shadow AI – and how those tools are handling sensitive data. The platform allows teams to classify, tolerate, or block AI apps with contextual policy enforcement.
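A toy rendering of that classify/tolerate/block model helps show the shape of contextual policy enforcement. The app names, actions, and policy schema below are invented for illustration and are not the product’s actual configuration.

```python
# Map each observed GenAI app to a policy action; unknown apps are treated
# as shadow AI and blocked by default.
POLICY = {
    "approved-copilot":   "allow",
    "public-chatbot":     "tolerate",   # allowed, but no sensitive data upload
    "pastebin-style-llm": "block",
}

def enforce(app: str, contains_sensitive_data: bool) -> str:
    action = POLICY.get(app, "block")            # shadow AI: block by default
    if action == "tolerate" and contains_sensitive_data:
        return "block"                           # contextual enforcement
    return action

print(enforce("public-chatbot", contains_sensitive_data=True))      # -> block
print(enforce("brand-new-ai-tool", contains_sensitive_data=False))  # -> block
```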
Also unveiled was Prisma AI Runtime Security, providing full-stack protection from agents to models to data sets. The demo showed real-time revocation of malicious model permissions and red teaming tools that simulate agentic AI attacks before they happen.
Scott Moser, senior vice president and CISO of Sabre Corporation, shared a compelling journey from 28 security incidents per year to near real-time remediation. After consolidating from four endpoint tools to one – Palo Alto’s XDR – Sabre reduced mean time to containment from days to hours, with 60% of alerts now fully automated.
He emphasized the importance of trust and partnership, citing Palo Alto’s support during previous incidents as the differentiator.
“They didn’t just sell us software,” he said. “They showed up.”
The most riveting moment came from Unit 42’s Carl Bryant, who walked the audience through a red-teamed, AI-driven attack modeled on MITRE ATT&CK. What once took days – recon, privilege escalation, payload delivery – could now take minutes using agentic AI.
His warning to retailers was blunt: “You’re in the bullseye for the rest of the year.”
With GenAI tools being abused by threat actors in China, North Korea, Iran, and Russia, Carl emphasized that organizations need AI-powered defenses simply to survive.
The TechArena Take
Palo Alto Networks didn’t just pitch a product suite in Dallas – they delivered a clear and urgent thesis: cybersecurity must move as fast as attackers do, and the only viable response is platformization – as they call it – anchored in AI and automation.
The implication? If you’re still piecing together best-of-breed tools from a dozen vendors, you’re not solving security; you’re maintaining a jigsaw puzzle that’s missing critical pieces. From cloud runtime protection to AI access control to unified SOCs, Palo Alto is betting that convergence, not complexity, will define the next decade of enterprise security.

Hunter Golden of OnLogic joined Allyson Klein for a candid conversation on scaling edge infrastructure, avoiding over-spec'ing, and right-sizing hardware for evolving AI workloads.
After the video, check out the Edge Server Selection Checklist here.

The arena of tech promises constant change, heady innovation that propels us forward, new entrants delivering solutions that were beyond imagination a few months ago, and of course, era-ending transitions when companies that were foundational pillars somehow collapse. We’ve seen and covered this last narrative as it’s played out over the last few years with Intel, the once North Star of compute platform definition. Now, a new behemoth has emerged that may be following in Intel’s unforced error footsteps – VMware.
VMware…the inventors of modern virtualization. VMware…the foundation of the private cloud. VMware…every IT manager’s best friend. That VMware.
We knew that when Broadcom acquired VMware, we’d see a transformation of the business. Hock Tan carries with him a reputation for financial success and efficient operations. Many wondered how this approach, effective in hardware component delivery, would fit with v-admins. As the story has evolved, we’ve seen business decisions that have disrupted the trust enterprise IT has placed in VMware for decades. Changes in licensing agreements, ramps in core-licensing minimums, and more have rolled out from new leadership, and with them a growing sense of uncertainty about whether VMware can be trusted as the foundation of the private cloud moving forward.
At TechArena, we’ve been watching this story evolve and wanted to check in on enterprise sentiment. We conducted a survey of IT operators at Dell Tech World, a terrific opportunity to get to the heart of what enterprises are thinking about data center computing, to see how IT organizations were viewing this landscape, and, more importantly, what they planned to do about it. What we found was eye-opening, even given our suspicions that VMware’s rock-solid hold on enterprise was wavering.
IT respondents made clear that migration is absolutely a priority in many organizations, with 10% of respondents having already migrated off of VMware solutions and a whopping 28% currently planning a migration. Given that 19% of respondents said they weren’t VMware users, that puts 38 of the remaining 81 percentage points – almost half of those who have historically used VMware – in some state of migration.

So what will drive IT destinations? Looking at top feature priorities for IT deployments, two items bubble to the top: 31% of respondents signaled that support for a wide selection of IT infrastructure was a top criterion, and an additional 25% tapped integration of cloud-native features as essential. While migration is very much a move away from a platform, the choice of what comes next will be driven by full feature sets that support modern private clouds. This offers insight into how IT organizations think: they dislike anything that feels like lock-in, and they want options for modern features like containers that will propel IT operations forward and open inroads for new classes of applications, such as enterprise’s expected ramp of AI.

So what’s the TechArena take? As Keith Townsend recently quipped, “there’s nothing wrong with VMware,” and from a narrow view of technology capability, he’s absolutely correct. What’s disrupting this industry stalwart is a customer orientation out of step with enterprise expectations, opening the door for others in the industry – Microsoft, Nutanix, Platform9, and Red Hat – to gain market share and customer loyalty. I expect the next two years to show a rapid advancement of active migrations and, equally importantly, modernization of enterprise clouds. We’ll be watching this space acutely for signs of the next major industry leader in the private cloud domain to take form. Watch for more TechArena coverage on all things cloud this week as I report from Cloud Field Day. Can’t wait!