Allyson Klein @ TechArena
Apr 13, 2026

Innovaccer Reframes AI Governance as Healthcare’s Accelerator

Back in 2016, the “godfather of AI” Geoffrey Hinton made a bold prediction: stop training radiologists immediately, because deep learning would render them obsolete within five years. Nearly a decade on, radiologists remain in as much demand as ever, a reminder of how much accuracy and safety matter in this field, and of the unique challenges of adopting AI in healthcare.

My recent conversation with Tapan Shah, AI Architect at Innovaccer and Agentic AI Work Group Lead at the Coalition for Health AI (CHAI), and our Data Insights co-host Jeniece Wnorowski from Solidigm, shed light on some of the challenges of creating scalable AI systems for healthcare. Tapan builds AI systems and agents that operate in real healthcare environments and enterprise systems, directly affecting patient and provider outcomes.

In Tapan’s view, the hardest problem in healthcare AI is not creating the right models or algorithms, but designing the surrounding system from the ground up.

The Gap Between Pilot and Production

Tapan opened with an example that cuts to the heart of the challenge. An AI clinical note generator built for a cardiology practice may work great in a pilot and then stumble when deployed for other disciplines like oncology or orthopedics, or even a different practice running a different electronic health record (EHR) system. Even when the underlying model remains the same, the results can be vastly different based on the medical discipline.

“Scaling AI into enterprise healthcare is less of an AI problem and more of a system design problem,” Tapan said. “The real problem here is whether in real-world situations, an AI agent being developed has the right level of access and the capability to create sufficiently transparent and explainable recommendations that even a skeptical clinician can accept.”

From Building Models to Agents

In the past decade, the healthcare AI industry has undergone a seismic shift from building predictive models to building agents. Historically, validating an AI system was relatively straightforward: train a model, measure accuracy on a holdout set, and deploy. This has been successfully validated in cases like early tumor detection, says Tapan.

Agents are a fundamentally different beast. They pull data from multiple sources, invoke various tools, and combine these inputs to perform complex tasks. Often there is no single source of truth, and clinicians can interpret the same data differently. Data can be missing, or certain users may lack access to certain tools or software. The challenge becomes ensuring that the agent behaves safely and predictably, even when it encounters a novel situation.

And because sensitive data is being handled, safeguards need to be built into the system from the get-go. For instance, a cardiology clinical note generator should not have access to a patient’s psychiatric records.
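To make the idea concrete, here is a minimal sketch of deny-by-default, scope-limited record access for an agent. All names here (`ALLOWED_CATEGORIES`, `fetch_record`, the agent ID) are hypothetical illustrations, not Innovaccer's actual API:

```python
# Hypothetical sketch: each agent is granted an explicit set of record
# categories; anything outside that scope is denied by default.

ALLOWED_CATEGORIES = {
    "cardiology-note-agent": {"cardiology", "labs", "medications"},
}

class RecordAccessError(PermissionError):
    """Raised when an agent requests a record category outside its scope."""

def fetch_record(agent_id: str, patient_id: str, category: str) -> dict:
    """Return a record only if the agent's scope covers the category."""
    allowed = ALLOWED_CATEGORIES.get(agent_id, set())
    if category not in allowed:
        # Deny by default: e.g. psychiatric records stay out of scope
        # for a cardiology note generator.
        raise RecordAccessError(f"{agent_id} may not read '{category}' records")
    return {"patient": patient_id, "category": category, "data": "..."}
```

The key design choice is that access is an allow-list per agent, so a new record type is invisible to every agent until someone deliberately grants it.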

Governance as Enablement, not Constraint

When the topic turned to governance, Tapan pushed back against the assumption that governance is primarily about controls and restrictions.  

“AI governance is not a constraint, it’s enablement,” he said, comparing a good governance framework to a constitution: it can be used as a binding document, or it can serve as the foundation for doing genuinely useful things, based on how you build and use it.

He illustrated this with a scenario where an authorization agent shifted from a 70% auto-approval rate to a 90% auto-approval rate. Effective governance would mean detecting this shift, reviewing the agent’s complete decision graph, and identifying the root cause. A successful governance model would enable such a review to happen in minutes rather than weeks.
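Detecting that kind of shift can be as simple as tracking the approval rate over a rolling window and flagging when it leaves an expected band. The following is a hypothetical sketch of such a monitor (the class name and thresholds are illustrative assumptions, not part of any described system):

```python
from collections import deque

class ApprovalRateMonitor:
    """Flag drift when the rolling auto-approval rate leaves its band."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline      # expected approval rate, e.g. 0.70
        self.tolerance = tolerance    # allowed deviation, e.g. 0.10
        self.decisions = deque(maxlen=window)

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if drift is detected."""
        self.decisions.append(approved)
        if len(self.decisions) < self.decisions.maxlen:
            return False  # not enough samples yet to judge
        rate = sum(self.decisions) / len(self.decisions)
        return abs(rate - self.baseline) > self.tolerance
```

A 70%-to-90% shift like the one in Tapan's example would trip this check once the window fills, triggering the human review of the decision graph he describes.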

Clinical Consequences

The thorniest issue in the conversation was accountability, especially as AI agents take on decisions with both clinical and administrative consequences. Tapan was candid: there is no perfect solution yet. Legal frameworks are still catching up to the question of what it means for an AI agent to make a consequential decision.

Innovaccer’s current approach is to make sure that there is comprehensive logging of every AI decision, granular access control for agents, and human oversight with the ability to override. For all clinical use cases, and many administrative ones, a human remains in the loop, able to review and reverse any AI-generated decision. As legal and governance frameworks evolve, these foundations will provide the structure to adapt.
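The logging-plus-override pattern described above can be sketched as an append-only decision log where a human reversal is recorded alongside, not instead of, the original AI decision. This is a hypothetical illustration of the pattern, not Innovaccer's implementation:

```python
import time

class DecisionLog:
    """Append-only log of AI decisions with human override support."""

    def __init__(self):
        self.entries = []

    def log(self, agent_id: str, decision: str, rationale: str) -> int:
        """Record an AI decision; return its entry ID for later review."""
        self.entries.append({
            "ts": time.time(),
            "agent": agent_id,
            "decision": decision,
            "rationale": rationale,
            "overridden_by": None,
        })
        return len(self.entries) - 1

    def override(self, entry_id: int, reviewer: str, new_decision: str) -> None:
        """A human reviewer reverses a decision; both outcomes are kept."""
        entry = self.entries[entry_id]
        entry["overridden_by"] = reviewer
        entry["final_decision"] = new_decision
```

Keeping the original decision and the override in the same entry is what makes the audit trail useful: reviewers can later ask not just what the final outcome was, but how often and why humans had to step in.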

Long-term Value  

When asked about measuring long-term strategic value, Tapan pointed to two holy grails: improved patient and provider outcomes. Treatment authorizations are a good example of where AI intervention can help, he explained.  

“There are cases where it can take upwards of two to three weeks for a prior authorization for a procedure, that leads to delay in care,” he said. “If we can bring that down to, let’s say, a day, less than a day, even a few minutes, it actually impacts patient outcomes and cost of care.”  

On the provider side, freeing clinicians from administrative burdens allows them to spend more of their time caring for patients, reducing burnout and stress.

And because healthcare AI serves multiple stakeholders, including operations, compliance, and clinical teams, a scalable solution needs solid system design, with observability, tracing, and monitoring built in from the very beginning.

The TechArena Take

Innovaccer’s approach demonstrates the challenges in building a successful system that can work across multiple specialties in real-life hospital scenarios. As integrating AI in healthcare has shifted from building models to building agents, the hardest problem to solve isn’t technical performance, but rather ensuring safety, accountability, and governance.  

Tapan’s framing that governance should be treated as enablement, not constraint, feels like an important mindset shift for leaders trying to move beyond the pilot stage. By helping to reduce authorization times and administrative burden, AI can help provide long-term benefits such as better patient care and provider experience.

If you’re interested in learning more, check out the full podcast. In addition, the Department of Health and Human Services recently published updated guidelines for AI, and the CHAI and Innovaccer websites provide useful guidance on the use of agentic AI in healthcare settings.
