Explore the cutting edge of computing, from the data center to the edge, including solutions that unlock the AI pipeline, all backed by Solidigm's leading SSD portfolio.

Just two years ago, most companies were simply asking what AI could do in an enterprise setting. In 2026, they are asking a harder question: how to scale without breaking their reliability or their budget. That shift from curiosity to capacity is where Isayah Young-Burke, go-to-market strategist at IONOS, spends most of his time.
In a recent TechArena Data Insights episode, I sat down with Isayah and Solidigm’s Jeniece Wnorowski to explore why security and access risks are the underexamined obstacle in enterprise AI, how data sovereignty is reshaping infrastructure decisions on both sides of the Atlantic, and why storage is now one of the most strategic layers in an AI-ready stack.
IONOS, part of the publicly traded IONOS Group with more than 6.6 million customer contracts globally, occupies a distinctive position in the cloud market. The company serves customers ranging from an individual registering their first domain to an enterprise running a multi-client managed service provider business. That breadth, Isayah explained, provides a kind of ground-level intelligence that shapes how the company serves customers and thinks about AI adoption.
“It's that customer service and that experience we carry behind our brand. It has to be good at every level,” he said. “AI adoption…doesn’t just start with AI. It starts with that digital footprint that grows into infrastructure. AI becomes that natural next step, just like after you get a website, you start thinking about cloud storage and cloud infrastructure. So we get to see that whole journey.”
When asked where he sees the biggest gaps as organizations operationalize AI, Isayah was direct: most enterprises are focused on the wrong thing. While model selection often dominates the discussion, choosing the “right” model is not what predicts success.
“Most AI challenges at scale — it’s not really a capability problem. It’s a system problem, not the model. And increasingly, they are a trust and access problem,” he said.
He drew on a panel discussion at IT Expo where a fellow speaker raised concerns about the level of access AI agents are granted within enterprise environments. An agent embedded in a company’s internal systems can do more than answer questions. It can write, delete and trigger workflows across an entire environment. “That’s a very different risk profile than a website chatbot,” Isayah noted.
Beyond security, he identified data readiness and workforce skill gaps as persistent obstacles. IONOS has responded by building tools like IONOS Momentum and the AI Model Hub, designed to make AI infrastructure accessible to small-to-medium businesses and public sector organizations that need practical solutions, not just raw compute.
Operating across the US and Europe gives IONOS a useful vantage point on how regulatory environments shape AI infrastructure decisions. In Europe, regulations like GDPR and initiatives like Gaia-X have made data residency a front-line concern from day one. In the US, speed and innovation tend to dominate, but that is shifting.
Isayah pointed to a dimension of US cloud law that often goes unexamined: the CLOUD Act gives the US government legal authority to access data held by American cloud providers, even when that data is stored in Europe. Because it is a subsidiary of a German company, IONOS operates under a different legal framework in Europe. This distinction matters significantly to companies that do business overseas.
“Knowing where your data lives and who has access to it under what conditions really matters,” he said. “Providers who can give answers to those questions have a real advantage.”
Nowhere is the infrastructure shift more visible than in storage. Isayah described storage as having “quietly become one of the most strategic layers in AI,” noting that as AI-enabled workloads scale, enterprises must manage massive volumes of unstructured data, including text, images, logs and embeddings, that traditional storage architectures were never designed to handle.
With this new challenge, he noted, there’s been a shift toward object storage. The medallion architecture approach, organizing data into bronze, silver and gold enrichment tiers, has become a common framework for managing this complexity. These practices have become the backbone for data lakes, the central repositories where raw data lives before being processed. S3-compatible object storage has emerged as the de facto standard for these data lakes, valued for its scalability, cost efficiency and — through IONOS — API accessibility.
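To make the medallion pattern concrete, here is a minimal sketch of how those tiers might be laid out as prefixes in an S3-compatible bucket. The endpoint URL, bucket name, credentials and object keys are illustrative placeholders rather than IONOS specifics; any S3-compatible provider would work the same way.

```python
# Minimal sketch: medallion tiers as prefixes in an S3-compatible bucket.
# The endpoint, credentials, bucket, and keys below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # provider-specific endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

BUCKET = "ai-data-lake"

# Bronze: raw data lands exactly as it arrived.
s3.put_object(Bucket=BUCKET, Key="bronze/logs/2025-01-15/app.log", Body=b"raw log bytes")

# Silver: cleaned, normalized records ready for feature extraction.
s3.put_object(Bucket=BUCKET, Key="silver/logs/2025-01-15/app.parquet", Body=b"normalized records")

# Gold: enriched, analysis-ready artifacts such as embeddings.
s3.put_object(Bucket=BUCKET, Key="gold/embeddings/2025-01-15/batch-0001.npy", Body=b"embedding vectors")

# Downstream jobs can read one tier without touching the others.
for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix="gold/").get("Contents", []):
    print(obj["Key"])
```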
Looking ahead, Isayah sees agentic AI as the next major infrastructure challenge. “AI agents aren’t just generating outputs,” he said. “They’re interacting with back-end systems. They’re triggering workflows from different applications and different software, making decisions across platforms in real time.”
That shift demands decentralized architecture, low-latency edge and cloud environments, strong API interoperability and, above all, rigorous security controls. He referenced Anthropic’s recent decision not to release its Mythos model publicly after the model, tested offensively in a sandbox experiment, found system vulnerabilities and escaped the test environment as instructed, as a reminder of what is at stake.
“Without fail-safes, it’s unwise to release this into the public,” Isayah said. “The foundation for these automated systems has to be solid.”
For technology decision-makers, the practical takeaway from this conversation is straightforward: the infrastructure decisions being made now around storage architecture, data governance and agent access controls will determine the ability of organizations to scale AI later. IONOS’s position gives Isayah a grounded view of where those decisions are going well and where they are not. Organizations still treating storage as a commodity and AI security as an afterthought may find that catching up later is considerably more expensive than getting it right now.
To learn more, listen to our full conversation in the published podcast, and read about IONOS’s cloud offerings, including the AI Model Hub, at ionos.com.

Our next TechArena Data Insights comes live from Xcelerated Compute, where Jeniece Wnorowski and I had a chance to sit down with David Moehring, general partner at Cambium Capital and former CEO of IonQ, to unpack how quantum computing fits into the broader computing landscape, and why the real story is less about breakthrough moments and more about building entire ecosystems.
David has had a long and illustrious career in quantum spanning academia, government and industry. He began in academia, pursuing a PhD in atomic and optical physics and quantum computing before moving into applied research at Sandia National Labs. From there, he transitioned into government, funding advanced quantum computing initiatives, and became the founding CEO of IonQ before his move to Cambium Capital.
That multidisciplinary background continues to shape how he evaluates companies today. “I found it very useful to understand the motivations of all of these different parts of the ecosystem,” he said, adding that it greatly informed how he looks at investing into quantum.
Cambium Capital is an early-stage venture capital fund that invests in advanced computer hardware, and we talked about the firm’s recent investments addressing the total cost of ownership (TCO) of AI data centers.
Rather than chasing a single breakthrough, the firm invests across the compute ecosystem, focusing on improvements to be made in different areas. “We have companies that we’re looking at as part of our portfolio that work in power delivery, movement of data, embedded memory, and packaging across the ecosystem,” he explained, “because if you move just one of them forward, everything else is still backed up.”
This systems-level thinking extends to how Cambium evaluates startups. Technical depth is essential, but so is market readiness. “You just can’t have your one good technology and toss it over the fence and think other people are going to integrate it,” he noted. “You need to be very deeply technical in your field, but you also need to really understand where it comes in.”
When it comes to quantum computing, David is clear-eyed about both its promise and its limitations. The technology is not poised to replace classical systems; it will augment them. “It’s not that if you make quantum computers then classical computers or GPUs will go away because they really solve very different problems,” he said.
Instead, quantum systems will tackle specific classes of problems that are either infeasible or inefficient for classical architectures. This mirrors the evolution of GPUs and specialized accelerators, each designed for distinct workloads.
“There are jobs that just cannot be done by classical computing,” David explained, framing quantum as another layer in an increasingly heterogeneous compute landscape.
On the relationship between AI infrastructure and quantum computing, he envisioned both evolving in parallel, solving very different problems in the long run. That being said, he noted that classical computers will still be required to control quantum computers. Beyond that, foundational innovations in materials science and lasers are likely to benefit both domains.
Cambium’s investment strategy isn’t limited to headline-grabbing processors. The firm also has strong conviction in the new quantum investment vehicle, 55 North, where David is the Board Chairman.
“There’s still a lot of hardware development that is needed, not just for the kind of the quantum processor itself, but the ecosystem,” David said.
That includes everything from laser systems for atomic qubits to cryogenic infrastructure for superconducting systems, components that rarely make headlines but are essential to scaling the technology. This mirrors Cambium’s broader philosophy: meaningful progress happens when the entire stack evolves together.
Despite growing interest, quantum computing remains widely misunderstood. Even quantum physics itself can be very counterintuitive, and many physicists struggle with some of the rules. That gap between theory and practical understanding often fuels unrealistic expectations. His best advice? Talk to the experts.
Looking ahead, David points to a specific category where quantum could first demonstrate real-world value: biopharmaceuticals. “We strongly believe that’s where you’re going to see first real kind of advantage from quantum,” he predicted.
At the same time, he remained cautious about overblown claims, adding that some of the news was driven more by hype than understanding.
David built on his years of experience working on quantum in different capacities to deliver a pragmatic take on where the field is headed. Our key takeaway was that along with the big-ticket processors, investments need to focus on building the underlying infrastructure needed to keep quantum compute running, something Cambium Capital is keenly focused on. Pushing one piece of the puzzle without solving other challenges would only lead to bottlenecks being moved down the line.

Maya Kalyan’s career has always lived at the intersection of disciplines. With a background in biomedical engineering and more than a decade in life sciences, she now serves as a staff algorithms and AI engineer in the molecular diagnostics space at Thermo Fisher Scientific. In a recent TechArena Data Insights episode, Solidigm’s Jeniece Wnorowski and I heard Maya offer a practitioner’s view on where AI is genuinely delivering value in healthcare, and where significant work remains.
The starting point for any meaningful discussion of innovation in diagnostics, Maya explained, is understanding what problems the industry is actually trying to solve. She identified four primary areas of focus: accuracy and reliability, turnaround time, cost reduction, and automation.
On both cost and turnaround time, she pointed to how the increasing demand for molecular testing is being met by innovations like multiplexing, which is the ability to detect multiple pathogens within a single sample. “We have respiratory virus tests that can detect COVID and flu and RSV viruses all within the same test,” she said. “That reduces the reagent use and lowers your consumable cost while also increasing throughput.” The broader goal, she noted, is building diagnostic systems that are simultaneously faster, more reliable, more affordable, and capable of handling the volume demands of high-throughput clinical and research environments.
Maya offered a measured perspective on AI’s current capabilities, drawing a clear line between where the technology performs well and where it has room for growth.
AI tends to be most effective, she said, when working with large, well-structured datasets toward a defined predictive outcome, such as pattern recognition in biological data, quality monitoring in experimental workflows, and domain-specific assistants that help researchers navigate documentation or troubleshoot instruments. The benefit, she noted, is improving the user experience and reducing manual touchpoints.
The limitations, however, are equally important for technology decision makers to understand. “When it comes to large language models, specifically the risk of hallucinations and its non-deterministic nature — where it can make up things or not say the same thing each time — can be a barrier to adoption in scientific or healthcare settings.” Her prescription is a hybrid approach: one that keeps human expertise in the loop by design, even as agentic AI systems grow more capable of autonomous workflows.
Building AI-enabled diagnostic products is not simply a technical challenge. Maya outlined a layered set of constraints that shape every deployment decision, starting with data governance. Healthcare datasets often contain sensitive patient or genomic information. Considerations for privacy affect how data can be accessed, shared, and used in ways that go well beyond standard HIPAA compliance.
There are also practical deployment decisions with regulatory implications: whether AI systems run in the cloud or directly on an instrument, and how factors like connectivity and latency influence what’s feasible. And once a model is deployed, the work isn’t over. “Teams need some kind of post-market surveillance plan,” she said, “which requires a strong model observability service where they can monitor the performance of the model and identify any drifts.” In practice, applying AI in this space means balancing innovation against a set of stringent operational and regulatory realities.
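As a rough illustration of the post-market surveillance Maya describes, the sketch below compares a recent window of logged model scores against a reference window using the Population Stability Index. The data, window sizes and alert threshold are assumptions for illustration, not a regulatory recipe or Thermo Fisher's tooling.

```python
# Sketch of drift monitoring on logged model scores using the Population
# Stability Index (PSI). Data and thresholds here are illustrative only.
import numpy as np

def psi(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    rec_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0)
    rec_pct = np.clip(rec_pct, 1e-6, None)
    return float(np.sum((rec_pct - ref_pct) * np.log(rec_pct / ref_pct)))

reference_scores = np.random.beta(2, 5, size=10_000)  # stand-in for validation-time scores
recent_scores = np.random.beta(2.5, 5, size=2_000)    # stand-in for recent production scores

value = psi(reference_scores, recent_scores)
# A common rule of thumb treats PSI > 0.2 as a sign of meaningful drift.
status = "investigate possible drift" if value > 0.2 else "distributions look stable"
print(f"PSI={value:.3f}: {status}")
```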
Before AI can meaningfully contribute to product development or diagnostics, Maya emphasized that organizations need to get their data house in order. That begins with rigorous data curation, ensuring experimental data is well-annotated and collected consistently so models can learn real patterns rather than artifacts of poor methodology.
Accessibility is the other piece. In many research organizations, data is scattered across instruments, labs, and databases with no unified infrastructure to bring it together. Maya pointed to large open biomedical datasets such as the Cancer Genome Atlas curated by the National Institutes of Health as important resources the research community already relies on. Looking ahead, she sees federated data approaches, which enable collaboration without requiring the sharing of raw patient data, as critical to accelerating AI’s role in diagnostics at scale.
While grounded in biomedical engineering, Maya’s perspective reflects broadly applicable lessons: the most durable AI deployments are built on disciplined data practices, realistic expectations, and a clear-eyed understanding of evolving regulatory requirements. In a field where the stakes are measured in patient outcomes, the pressure to get it right is acute. If AI lives up to its potential in life sciences, the payoff won’t just be operational efficiency. It will be earlier diagnoses, more personalized treatments, and meaningfully better quality of life for patients facing some of the most challenging medical conditions.

There’s a conversation happening at the edges of the AI infrastructure world that hasn’t quite broken through to the mainstream yet. It’s not about which GPU cluster wins the benchmark race or which hyperscaler is adding the most capacity. It centers on something far more fundamental: the cost of moving data.
In a recent Data Insights episode, I sat down with Solidigm’s Jeniece Wnorowski and Nilesh Shah, VP of Business Development at ZeroPoint Technologies, to work through where this friction in modern AI systems lives.
Nilesh began with an often-overlooked aspect of data storage: how much power it takes to move data. Moving a single bit of data from storage, through high-bandwidth memory or low-power double data rate (LPDDR), and into the on-chip static random access memory (SRAM) where computation actually happens costs roughly ten times more in power than performing the computation itself. That ratio explains why inference chip innovators like Groq, Cerebras and SambaNova are focusing on data movement and memory hierarchies over compute.
ZeroPoint Technologies was founded on the premise that the need for data and memory is going to increase rapidly, and one of the ways to tackle that challenge is through lossless memory compression. By reducing the volume of data physically moving across the system, you increase the effective bandwidth and capacity available to the compute engine.
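A quick back-of-envelope calculation shows why that matters. The figures below, including the 2:1 compression ratio and the per-bit energy numbers, are illustrative assumptions rather than ZeroPoint measurements; the point is simply how compression multiplies effective bandwidth and shrinks the energy spent on movement.

```python
# Back-of-envelope: effect of lossless compression on bandwidth and movement
# energy. All numbers are illustrative assumptions, not measured figures.
link_bandwidth_gbs = 100.0       # raw link bandwidth, GB/s
compression_ratio = 2.0          # assumed lossless compression ratio
move_energy_pj_per_bit = 10.0    # energy to move one bit
compute_energy_pj_per_bit = 1.0  # energy to operate on one bit (~10x cheaper)

effective_bandwidth = link_bandwidth_gbs * compression_ratio
print(f"Effective bandwidth: {effective_bandwidth:.0f} GB/s over a {link_bandwidth_gbs:.0f} GB/s link")

bits_per_gb = 8e9
move_uncompressed_j = bits_per_gb * move_energy_pj_per_bit * 1e-12
move_compressed_j = (bits_per_gb / compression_ratio) * move_energy_pj_per_bit * 1e-12
compute_j = bits_per_gb * compute_energy_pj_per_bit * 1e-12
print(f"Energy per logical GB: {move_uncompressed_j:.3f} J to move uncompressed, "
      f"{move_compressed_j:.3f} J compressed, vs {compute_j:.3f} J to compute on it")
```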
On the question of whether AI workflows were being constructed correctly for the management of data, and how this could change as enterprises start scaling inference into different parts of their business, Nilesh pointed out that the key challenge to plan for is agentic AI entering the workflow.
A pattern seen at recent tech conferences was that chip designers were integrating multiple specialized AI agents into a single electronic design automation (EDA) workflow, each handling a distinct task, like error detection or chip verification. This would mean having domain-specific inference solutions for even EDA operations, fundamentally changing the way enterprises will need to think about data.
As data becomes a challenge, memory bandwidth could become a bottleneck. Nilesh pointed out that inference in agentic workflows takes place in two stages: prefill and decode. The prefill stage processes the input prompt and is genuinely compute intensive. Modern GPU clusters handle this part reasonably well. The decode stage, where the output is generated, is extremely memory intensive and is what’s really limiting tokens per second.
When it comes to responsiveness at enterprise scale, say 100,000 employees simultaneously interacting across multiple streams of data, the decode phase becomes a real bottleneck. At NVIDIA GTC 2026, a lot of the keynotes revolved around developing heterogeneous architectures that can manage the decode phase more efficiently.
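A rough roofline calculation makes the decode bottleneck concrete. Assuming a single stream that must read every model weight from memory for each generated token, tokens per second is capped by memory bandwidth rather than compute; the model size and bandwidth figures below are illustrative, not vendor benchmarks.

```python
# Rough decode-phase roofline for a single stream: each generated token
# streams the full set of weights from memory. Numbers are illustrative.
params_billion = 70          # e.g., a 70B-parameter model
bytes_per_param = 2          # FP16/BF16 weights
memory_bandwidth_tb_s = 3.0  # illustrative accelerator memory bandwidth

bytes_per_token = params_billion * 1e9 * bytes_per_param
max_tokens_per_s = memory_bandwidth_tb_s * 1e12 / bytes_per_token
print(f"Decode is capped near {max_tokens_per_s:.0f} tokens/s per stream")
# Compute rarely sets this limit during decode; memory bandwidth does,
# which is why batching, KV-cache handling, and compression matter at scale.
```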
We talked about when quantum computing would enter the picture. “What is the ChatGPT moment for quantum computing? That’s the favourite question I like to ask,” said Nilesh. He predicted that it could make sense to attach quantum processing units to data centers to efficiently offload some of the compute work that quantum tends to do well. There are already examples of banks deploying early quantum computers, and another use case could be creating more secure encryption protocols.
When I asked Nilesh what he sees on the horizon for memory and storage technology, he outlined three distinct directions where investment and innovation are converging.
The first is alternative memory technologies. Dynamic random-access memory (DRAM) is a decades-old architecture that hasn’t changed fundamentally, and its limitations are starting to bite at exactly the moment AI workloads are scaling fastest. The second is new interfaces between memory and compute that will transform how memory communicates with the compute engine.
The third is the most significant shift in perspective: the unit of infrastructure design is moving from the chip, to the server, to the rack, and now to the data center as a single coherent system. Organizations are thinking about AI infrastructure in terms of megawatts allocated to a data center, with memory, storage, and compute all traded off within that power budget.
The biggest misconception, he felt, was the assumption that scaling AI output will keep requiring a proportional increase in power. “I expect a breakthrough that someone will come up with an entirely new style of physics that will break that linear assumption that to go from 100 LLMs to a million or going from a million users to 100 million, we’ll just multiply the megawatts of power,” he said.
My conversation with Nilesh clarified a change in direction I’ve noticed at many recent tech conferences. The 10x cost differential between moving data and computing on it is the reason the entire inference chip landscape looks the way it does. It’s a significant engineering constraint that companies like ZeroPoint are building directly against. The prefill-decode distinction matters because enterprises planning inference deployments at scale need to architect around the decode phase as a distinct bottleneck.
We’re excited to see what new innovations take place in the memory space, and if, as Nilesh believes, someone will eventually find a way to scale AI without the linear progression of more compute meaning more power.

Quantum computing conversations tend to get pulled toward the exotic: superposition, quantum entanglement, and what a world with exponentially faster compute will look like. But at the Xcelerated Computing Conference in New York, Solidigm’s Jeniece Wnorowski and I spoke with Burns Healey, quantum infrastructure lead for Dell, who offered a grounded perspective. For quantum technology to matter, it first needs a place to land, and that place is working alongside the classical data center.
It’s a framing that shapes everything about how Dell is approaching the market and one with practical implications for technology decision makers weighing when and how quantum fits into their roadmap.
Our conversation started with reconsidering the terms we use when we talk about quantum technology. “Quantum computers are almost a bit of a misnomer,” Burns said. “When you say quantum computer, I prefer to use the term quantum accelerator, because really that’s what they are. They’re an add-on to HPC or data center infrastructure that give you specialized options for computing specific workloads.”
This perspective, that quantum technology is best considered an extension of high-performance computing environments, can be helpful to enterprise leaders who may feel pressure to engage with quantum. Organizations attempting to adopt quantum before they’ve pushed classical computing to its limits are, in his view, getting ahead of themselves.
“Going to a quantum computer before you’ve attempted to use classical HPC or large data center environments is a bit like trying to run before you’ve walked,” he said. “Only once you hit those limits in your data center, in your HPC environment, will you start to think about what quantum can do that you can’t currently do.”
Much of the early quantum conversation focused on physical qubit (quantum bit) counts and error rates, but recent conversations in quantum computing have shifted toward logical qubits and error correction as the field considers what usable quantum will look like.
Burns drew a direct analogy to classical computing. Just as error-correcting code allows applications to run more reliably, logical qubits aim to provide a stable, abstracted layer above the physical qubit substrate.
“The way we use them from a vendor and hardware supplier viewpoint is that we are going to aim to abstract away a lot of that physical layer complexity from the end user,” he said. “It’s a lowering the barrier to entry question in my mind, and the best way we can help onboard new people to the technology.”
When you think of quantum computers as quantum accelerators, the importance of the infrastructure that enables quantum and classical computing to work seamlessly becomes paramount. Rather than building quantum processing units (QPUs), Dell is helping produce the ecosystem and infrastructure appliances that will make quantum devices usable within real data center environments. A major challenge in that area is latency between quantum and classical systems. Burns pointed to Dell’s collaboration with NVIDIA as a current example of this work.
NVIDIA has developed a framework called NVQLink, designed to minimize the round-trip latency between QPUs and classical compute. Using NVQLink on Dell PowerEdge servers, the two companies recently demonstrated sub-4-microsecond latency, a result Burns described as meaningful progress toward the kind of tight integration that real quantum workloads will require.
“We’re really looking at what the technology needs in terms of specifications and hitting those targets to make this infrastructure usable for real quantum computing,” he said.
Dell is also engaged with quantum partners including QuEra and IQM, as well as a joint research initiative with Ernst & Young, all documented on Dell’s hybrid quantum-classical computing page.
When asked what needs to happen technically and operationally for quantum to move from research settings to deployable infrastructure, Burns identified two parallel tracks.
On the software side, progress is already underway. Frameworks like IBM’s open-source Qiskit are helping developers work with quantum gates and algorithms today. The next meaningful shift will come when developers can work at a Python-level abstraction, or eventually through application-specific tools that require no quantum expertise at all.
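For readers who haven't seen what that gate-level abstraction looks like today, here is a minimal Qiskit example: a two-qubit Bell state built directly from quantum gates. It shows the level developers currently work at, which the higher-level tools Burns describes aim to abstract away.

```python
# Minimal Qiskit example of today's gate-level abstraction: a Bell state.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # measure both qubits into classical bits

print(qc.draw())             # ASCII diagram of the circuit
```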
On the hardware side, cabling is one of the more pressing unsolved problems. Superconducting qubit systems require analog signals routed to each individual qubit. At 50 or 100 qubits, that is manageable. At thousands or millions of qubits, the cabling architecture becomes an issue. Ideas to address this include embedding classical components inside dilution refrigerators and more sophisticated multiplexing approaches, both of which introduce their own challenges.
Dell’s positioning in the quantum space is as perceptive as you would expect from one of the world’s classical computing giants. Rather than competing with QPU vendors, the company is focused on the infrastructure layer that will make quantum systems usable in real enterprise environments.
Burns’s framing of quantum as an accelerator, not a computer, is a useful corrective for organizations trying to calibrate their engagement with the technology. For most enterprises, the near-term question is not whether to adopt quantum, but how to ensure that classical infrastructure is ready when quantum workloads become viable. The organizations with the strongest HPC foundations will be best positioned to take advantage of it.
Listen to our conversation in the full podcast episode; more information about Dell’s hybrid quantum-classical computing work is available on Dell’s quantum computing site.

Back in 2016, the “godfather of AI” Geoffrey Hinton made a bold prediction: stop training radiologists immediately, because deep learning would render them obsolete within five years. A decade on, that prediction looks unlikely to come true any time soon, and radiologists remain in just as much demand, a reminder of how important accuracy and safety remain and of the unique challenges of adopting AI in this space.
My recent conversation with Tapan Shah, AI Architect at Innovaccer and Agentic AI Work Group Lead at the Coalition for Health AI (CHAI), and our Data Insights co-host Jeniece Wnorowski from Solidigm, shed light on some of the challenges in creating scalable AI systems for healthcare. His role involves creating AI systems and agents that work in actual healthcare environments and enterprise systems that affect patient and provider outcomes.
In Tapan’s view, the hardest problem in healthcare AI is not creating the right models or algorithms, but designing the surrounding system from the ground up.
Tapan opened with an example that cuts to the heart of the challenge. An AI clinical note generator built for a cardiology practice may work great in a pilot and then stumble when deployed for other disciplines like oncology or orthopedics, or even a different practice running a different electronic health record (EHR) system. Even when the underlying model remains the same, the results can be vastly different based on the medical discipline.
“Scaling AI into enterprise healthcare is less of an AI problem and more of a system design problem,” Tapan said. “The real problem here is whether in real-world situations, an AI agent being developed has the right level of access and the capability to create sufficiently transparent and explainable recommendations that even a skeptical clinician can accept.”
In the past decade, the healthcare AI industry has undergone a seismic shift from building predictive models to building agents. Historically, validating an AI system was relatively straightforward: train a model, measure accuracy on a holdout set, and deploy. This has been successfully validated in cases like early tumor detection, says Tapan.
Agents are a fundamentally different beast. They pull data from multiple sources, invoke various tools, and combine these inputs to perform complex tasks. Often there is no single source of truth, and clinicians can interpret the same data differently. Data can be missing, or certain users may not have access to certain tools or software. In this environment, the challenge becomes ensuring that the agent being built behaves safely and predictably, even in scenarios it has never encountered.
And because sensitive data is being handled, safeguards need to be built in the system from the get-go. For instance, a cardiology clinical note generator should not have access to a patient’s psychiatric records.
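One way to read that safeguard in system-design terms is that data access is scoped per agent at the platform layer rather than left to the model. The sketch below is a hypothetical illustration only; the agent names, record categories and EHR stub are not Innovaccer's implementation.

```python
# Hypothetical sketch: per-agent data scoping enforced outside the model.
# Agent IDs, record categories, and the EHR stub are illustrative only.
ALLOWED_RECORD_TYPES = {
    "cardiology-note-agent": {"vitals", "cardiology_notes", "medications", "labs"},
    "oncology-note-agent": {"vitals", "oncology_notes", "pathology", "labs"},
}

def query_ehr(record_type: str, patient_id: str) -> list[dict]:
    return []  # placeholder for the real EHR integration

def fetch_records(agent_id: str, record_type: str, patient_id: str) -> list[dict]:
    allowed = ALLOWED_RECORD_TYPES.get(agent_id, set())
    if record_type not in allowed:
        # Denied requests are surfaced, not silently dropped, so reviews can see them.
        raise PermissionError(f"{agent_id} may not access {record_type}")
    return query_ehr(record_type, patient_id)

# A cardiology note generator can pull labs, but not psychiatric records.
fetch_records("cardiology-note-agent", "labs", patient_id="12345")
```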
When the topic turned to governance, Tapan pushed back against the assumption that governance is primarily about controls and restrictions.
“AI governance is not a constraint, it’s enablement,” he said, comparing a good governance framework to a constitution: it can be used as a binding document, or it can serve as the foundation for doing genuinely useful things, based on how you build and use it.
He illustrated this with a scenario where an authorization agent shifted from a 70% auto-approval rate to a 90% auto-approval rate. Effective governance would mean detecting this shift, reviewing the agent’s complete decision graph and identifying the root cause. A successful governance model would enable such decisions to be made in minutes rather than weeks.
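As a concrete illustration of detecting that kind of shift, the sketch below compares a recent window's auto-approval rate to a baseline with a simple two-proportion z-test. Window sizes and the alert threshold are assumptions for illustration, not Innovaccer's governance tooling.

```python
# Illustrative drift check on auto-approval rates using a two-proportion
# z-test. Windows and thresholds are assumptions, not a production setup.
from math import sqrt

def approval_shift_z(baseline_approved: int, baseline_total: int,
                     recent_approved: int, recent_total: int) -> float:
    p1 = baseline_approved / baseline_total
    p2 = recent_approved / recent_total
    pooled = (baseline_approved + recent_approved) / (baseline_total + recent_total)
    se = sqrt(pooled * (1 - pooled) * (1 / baseline_total + 1 / recent_total))
    return (p2 - p1) / se

# Baseline window ran at ~70% auto-approval; the recent window is at ~90%.
z = approval_shift_z(baseline_approved=700, baseline_total=1000,
                     recent_approved=900, recent_total=1000)
if abs(z) > 3.0:  # conservative alert threshold
    print(f"z={z:.1f}: approval rate shifted; pull the agent's decision graph for review")
```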
The thorniest issue in the conversation was accountability, especially as AI agents take on decisions with both clinical and administrative consequences. Tapan was candid: there is no perfect solution yet. Legal frameworks are still catching up to the question of what it means for an AI agent to make a consequential decision.
Innovaccer’s current approach is to make sure that there is comprehensive logging of every AI decision, granular access control for agents, and human oversight with the ability to override. For all clinical use cases, and many administrative ones, a human remains in the loop, able to review and reverse any AI-generated decision. As legal and governance frameworks evolve, these foundations will provide the structure to adapt.
When asked about measuring long-term strategic value, Tapan pointed to two holy grails: improved patient and provider outcomes. Treatment authorizations are a good example of where AI intervention can help, he explained.
“There are cases where it can take upwards of two to three weeks for a prior authorization for a procedure, that leads to delay in care,” he said. “If we can bring that down to, let’s say, a day, less than a day, even a few minutes, it actually impacts patient outcomes and cost of care.”
On the other end, freeing clinicians of administrative burdens allows them to spend more of their time caring for patients, reducing burnout and stress levels.
And because healthcare AI serves multiple stakeholders including operations, compliance and clinical teams, a scalable solution would need to be designed with solid system design principles, with observability, tracing, and monitoring built in right from the very beginning.
Innovaccer’s approach demonstrates the challenges in building a successful system that can work across multiple specialties in real-life hospital scenarios. As integrating AI in healthcare has shifted from building models to building agents, the hardest problem to solve isn’t technical performance, but rather ensuring safety, accountability, and governance.
Tapan’s framing that governance should be treated as enablement, not constraint, feels like an important mindset shift for leaders trying to move beyond the pilot stage. By helping to reduce authorization times and administrative burden, AI can help provide long-term benefits such as better patient care and provider experience.
If you’re interested in learning more, check out the full podcast. In addition, the Department of Health and Human Services recently published updated guidelines for AI, and the CHAI and Innovaccer websites provide useful guidance on the use of agentic AI in healthcare settings.

The moments that help define an employee's trajectory, including performance reviews and manager feedback, are too consequential to get wrong. AI promises to help managers be better prepared for these important conversations by presenting clear insights that draw from the sea of daily work data. But it can only deliver when it is trusted on all sides.
In my recent conversation with Maher Hanafi, senior vice president of engineering at Betterworks, and Solidigm’s Jeniece Wnorowski, we discussed what it takes to turn AI’s potential into a trusted and valued enterprise solution.
Betterworks describes itself as a talent and performance management platform for global enterprise customers, but Maher is quick to distinguish it from traditional HR software. Where legacy tools function as administrative record-keepers by tracking history, storing documents, and managing lists, Betterworks aims to orient its platform around the flow of work.
“We were looking at the data from a performance lens,” Maher explained. “We’re trying to enable anything that helps go beyond just tracking history…to focus more on the flow of work.” For large enterprises with complex organizational structures spanning multiple regions, that means helping individuals, managers, and business units connect their daily efforts to company-wide goals, a capability that only becomes more valuable, and more technically demanding, as AI matures.
Maher offered the useful frame of thinking about AI as enabling “horizontal intelligence.” Before AI, Betterworks’ modules — goals, feedback, one-on-one meetings, talent and skills — operated as largely separate domains. Generative AI has made it possible to interconnect those domains in ways that weren’t previously practical.
“With AI today, it’s just way easier to interconnect all of these,” he said. “I think SaaS products and SaaS platforms will be built as more of an interconnected set of layers that will break the silos between different components and features.”
In practical terms, this means a manager preparing for a one-on-one meeting can receive and review AI-generated insights drawn from an employee’s recent goals, feedback, and performance history before a conversation, rather than manually pulling together and examining months’ worth of data.
When AI provides insights that can influence such important conversations, it’s paramount that all parties can trust the system’s output. Operating in this environment, Betterworks has emphasized responsible AI guided by two principles in particular: transparency and explainability. Transparency means the system can show users what sources it drew on to generate a response. Explainability means users understand why an AI suggestion is what it is. With this foundation, when managers are giving feedback to employees based on information AI provides, they can make suggestions and have confidence in the underlying insights.
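In practice, those two principles tend to show up as a data shape: every AI-generated insight carries the records it drew on and a short rationale. The sketch below is a hypothetical illustration of that idea; the field names and example values are not Betterworks' schema.

```python
# Hypothetical shape for a transparent, explainable insight. Field names and
# example values are illustrative, not a real product schema.
from dataclasses import dataclass, field

@dataclass
class Insight:
    text: str                                         # suggestion shown to the manager
    rationale: str                                    # explainability: why this suggestion
    sources: list[str] = field(default_factory=list)  # transparency: records consulted

insight = Insight(
    text="Recognize progress on the Q3 onboarding goal; recent peer feedback highlights strong mentoring.",
    rationale="The goal is ahead of schedule and the last two feedback entries are consistently positive.",
    sources=["goal:q3-onboarding", "feedback:2025-08-14", "feedback:2025-09-02"],
)

print(insight.text)
for src in insight.sources:
    print("  source:", src)
```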
“We are trying to use AI as a way to really get you as a better individual, better member of the organization and contributing to the big picture versus having AI take control,” Maher said. “You should be in the driver’s seat. AI is just there to help you and be a co-pilot, nothing else.”
As the conversation turned to broader lessons, Maher offered practical guidance for engineering and technology leaders navigating AI adoption inside enterprise organizations.
His first recommendation is simply to stay informed without becoming overwhelmed. “AI is moving very fast…. Picking the one out of the haystack is very challenging,” he said. To manage that, he created what he calls an AI Engineering Lab at Betterworks, a structured environment where engineers could explore tools and run experiments, rather than waiting for top-down mandates on which technology to adopt.
He also urged leaders to take the financial dimension seriously. “There was a huge risk of AI taking too much money without achieving ROI,” he said. “Turning into someone who cares more about the financial aspect and looking at costs on a frequent basis…was a huge success.” In his view, senior technology leaders increasingly need to think with some of the rigor of a chief financial officer when it comes to managing AI infrastructure spend.
Finally, he pointed to the value of frameworks. His own AI maturity framework and a flywheel model focused on planning, building, and optimizing AI systems have helped keep the team oriented even as the technology underneath them continues to shift.
Maher’s perspective reflects a measured but substantive view of what AI can deliver in enterprise software, one grounded in the realities of compliance-heavy industries and the organizational complexity of global customers. Rather than positioning AI as a transformation layer bolted onto an existing product, Betterworks has committed to rebuilding the platform’s foundations to make intelligence a native capability. For technology decision makers evaluating AI-powered SaaS in regulated environments, the Betterworks story offers a useful model.
Learn more about Betterworks at betterworks.com, and watch our full podcast episode.

As organizations build private AI clouds to control costs and protect their data, they face a familiar dilemma: the trade-off between performance and operational simplicity. Hyperscalers (like AWS or Google) have both, but only because they have armies of engineers to build custom software that tames their hardware.
My recent conversation with Solidigm's Jeniece Wnorowski and Marc Austin, CEO and co-founder of Hedgehog, revealed how enterprises can now access that same "Hyperscaler Agility"—without the army of engineers.
The key? Decoupling the control plane from the hardware.
Hedgehog’s mission centers on enabling enterprises, government agencies, and neoclouds to “network like a hyperscaler.” This means moving beyond rigid trade-offs. Instead of being forced to choose between the stability of validated reference architectures or the flexibility of open standards, Hedgehog allows organizations to leverage both, orchestrated by a single software platform.
This approach offers a massive strategic advantage: Supply Chain Resilience.
As Marc explained, a diversified hardware strategy is critical for risk management. “If you have a supply shock—like a global pandemic or a trade war—that can limit your ability to scale because supply becomes constrained,” he noted. “You can’t add capacity to your network when you need to.”
By running open-source software on OCP standards-based servers, organizations can acquire equipment from whichever vendor offers the best price and availability at that moment. And because Hedgehog’s control plane is hardware-agnostic, it can eventually extend this same flexibility to other high-performance reference architectures, ensuring that the software experience remains consistent regardless of the underlying silicon.
Hardware diversity is only half the battle; the other half is operational speed. Hedgehog delivers all the software needed to automatically install, configure, and operate AI networks as a turnkey "appliance." This eliminates weeks of manual configuration work by network architects.
More importantly, it democratizes access. By providing a Virtual Private Cloud (VPC) service, Hedgehog allows enterprise users or neocloud tenants to operate within a private, secure segment—consuming on-premise AI infrastructure with the same self-service ease they expect from a public cloud provider.
The power of this "Universal Control Plane" is evident in how customers are using it to bypass traditional infrastructure bottlenecks.
Zipline, an automated drone delivery company, utilized Hedgehog to build a private cloud that cut infrastructure costs by 70% while keeping their delivery data secure. The critical win wasn't just the hardware savings—it was the operational model. They managed the deployment with their existing DevOps team, without hiring specialized network engineers, because Hedgehog abstracted the physical switching complexity into simple software commands.
In the high-performance arena, FarmGPU (operating the Solidigm AI Central Lab) used Hedgehog to orchestrate an 800G fabric for AI training. Independent testing by SemiAnalysis highlighted that Hedgehog’s software-defined congestion management maximized bandwidth and GPU utilization.
This proves a vital point for the future of AI: The software you use to manage the network matters just as much as the wire itself.
Agility isn't just about the switch fabric; it's about how data enters the building. FarmGPU faced a challenge familiar to many AI operators: ingesting terabytes of training data through a limited enterprise firewall.
Legacy solutions required expensive, proprietary hardware routers. Hedgehog’s software-defined gateway turns standard x86 servers into high-performance routers. This effectively brings the functionality of a public cloud "Transit Gateway" on-premise, allowing secure, multi-tenant segmentation for AI workloads.
Hedgehog is redefining the role of the network in the AI stack. By focusing on a hardware-agnostic control plane, they are ensuring that the "Brain" of the network (the automation) is distinct from the "Body" (the switch).
This is the architecture of the future. It gives enterprises the ultimate luxury: Choice. It allows IT leaders to select the best hardware for their specific workload—optimizing for cost, performance, or supply chain availability—while maintaining a consistent, automated operating experience across the entire fleet.
For organizations that view data as their competitive moat, this ability to unify diverse infrastructure under one automated standard is the key to scaling AI.