Allyson Klein, TechArena
May 6, 2026

Taking AI to the Diagnostic Lab with Care

Maya Kalyan’s career has always lived at the intersection of disciplines. With a background in biomedical engineering and more than a decade in life sciences, she now serves as a staff algorithms and AI engineer in the molecular diagnostics space at Thermo Fisher Scientific. In a recent TechArena Data Insights episode, Solidigm’s Jeniece Wnorowski and I heard Maya offer a practitioner’s view on where AI is genuinely delivering value in healthcare, and where significant work remains.

Four Problems, One Goal

The starting point for any meaningful discussion of innovation in diagnostics, Maya explained, is understanding what problems the industry is actually trying to solve. She identified four primary areas of focus: accuracy and reliability, turnaround time, cost reduction, and automation.

On both cost and turnaround time, she pointed to how the increasing demand for molecular testing is being met by innovations like multiplexing, which is the ability to detect multiple pathogens within a single sample. “We have respiratory virus tests that can detect COVID and flu and RSV viruses all within the same test,” she said. “That reduces the reagent use and lowers your consumable cost while also increasing throughput.” The broader goal, she noted, is building diagnostic systems that are simultaneously faster, more reliable, more affordable, and capable of handling the volume demands of high-throughput clinical and research environments.

Where AI Works, and Where It Doesn’t

Maya offered a measured perspective on AI’s current capabilities, drawing a clear line between where the technology performs well and where it has room for growth.

AI tends to be most effective, she said, when it works with large, well-structured datasets toward a defined predictive outcome. She pointed to pattern recognition in biological data, quality monitoring in experimental workflows, and domain-specific assistants that help researchers navigate documentation or troubleshoot instruments. The benefit, she noted, is a better user experience with fewer manual touchpoints.

The limitations, however, are equally important for technology decision makers to understand. “When it comes to large language models specifically, the risk of hallucinations and its non-deterministic nature, where it can make up things or not say the same thing each time, can be a barrier to adoption in scientific or healthcare settings.” Her prescription is a hybrid approach: one that keeps human expertise in the loop by design, even as agentic AI systems grow more capable of autonomous workflows.
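To make that hybrid pattern concrete, here is a minimal Python sketch of a human-in-the-loop gate. It is our illustration rather than anything Maya described: the model wrapper, reviewer hook, and confidence threshold are all assumptions, and a production system would use calibrated scores, audit logging, and a real review queue.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # model-reported or calibrated score in [0, 1]

def answer_with_review(
    query: str,
    model: Callable[[str], Answer],                   # any LLM wrapper returning an Answer
    request_review: Callable[[str, Answer], Answer],  # hands off to a human expert
    min_confidence: float = 0.9,                      # illustrative threshold, not a recommendation
) -> Answer:
    """Route low-confidence or unstable answers to a human reviewer.

    The model is queried twice; if the two runs disagree (a crude proxy for
    non-determinism) or confidence is below threshold, a human signs off.
    """
    first, second = model(query), model(query)
    stable = first.text.strip() == second.text.strip()
    confident = first.confidence >= min_confidence
    if stable and confident:
        return first
    return request_review(query, first)
```

Querying the model twice is a blunt proxy for the non-determinism Maya flagged; disagreement between runs, like low confidence, becomes a reason to escalate rather than to answer automatically.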

Regulatory Reality

Building AI-enabled diagnostic products is not simply a technical challenge. Maya outlined a layered set of constraints that shape every deployment decision, starting with data governance. Healthcare datasets often contain sensitive patient or genomic information, and privacy considerations shape how that data can be accessed, shared, and used, in ways that go well beyond standard HIPAA compliance.

There are also practical deployment decisions with regulatory implications: whether AI systems run in the cloud or directly on an instrument, and how factors like connectivity and latency influence what’s feasible. And once a model is deployed, the work isn’t over. “Teams need some kind of post-market surveillance plan,” she said, “which requires a strong model observability service where they can monitor the performance of the model and identify any drifts.” In practice, applying AI in this space means balancing innovation against a set of stringent operational and regulatory realities.
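As one illustration of what that kind of observability can look like, the sketch below computes a population stability index (PSI) between a reference score distribution and a recent production window. PSI is a common drift statistic, but the bin count and the rough 0.2 alert level noted in the comment are rule-of-thumb assumptions on our part, not anything prescribed in the episode.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference window and live data.

    Bin edges come from the reference distribution; a common rule of thumb
    treats PSI above roughly 0.2 as drift worth investigating.
    """
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range live values
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)         # avoid log(0) on empty bins
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

# Example: validation-time model scores vs. scores from the latest production window
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2.0, 5.0, size=5000)
recent_scores = rng.beta(2.5, 4.5, size=1000)        # deliberately shifted
print(f"PSI = {population_stability_index(baseline_scores, recent_scores):.3f}")
```

In a regulated setting this kind of statistic would feed a broader post-market surveillance plan, with documented thresholds, alerting, and a defined response when drift is confirmed.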

The Data Foundation Problem

Before AI can meaningfully contribute to product development or diagnostics, Maya emphasized that organizations need to get their data house in order. That begins with rigorous data curation, ensuring experimental data is well-annotated and collected consistently so models can learn real patterns rather than artifacts of poor methodology.
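A lightweight example of what that curation step might check is sketched below, assuming a hypothetical tabular schema; the column names and checks are ours, chosen for illustration, and a real pipeline would validate far more.

```python
import pandas as pd

# Hypothetical schema for an experiment-results table; real column names will differ.
REQUIRED = ["sample_id", "assay", "instrument_id", "collected_at", "result"]

def curation_report(df: pd.DataFrame) -> dict:
    """Summarize common curation problems before any modeling is attempted."""
    missing_cols = [c for c in REQUIRED if c not in df.columns]
    present = [c for c in REQUIRED if c in df.columns]
    report = {
        "missing_columns": missing_cols,
        "rows_with_nulls": int(df[present].isna().any(axis=1).sum()) if present else 0,
        "duplicate_sample_assay_pairs": (
            int(df.duplicated(subset=["sample_id", "assay"]).sum())
            if {"sample_id", "assay"} <= set(df.columns) else None
        ),
    }
    # Heavily unbalanced counts per instrument can hint at batch effects worth annotating.
    if "instrument_id" in df.columns:
        report["rows_per_instrument"] = df["instrument_id"].value_counts().to_dict()
    return report
```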

Accessibility is the other piece. In many research organizations, data is scattered across instruments, labs, and databases with no unified infrastructure to bring it together. Maya pointed to large open biomedical datasets such as the Cancer Genome Atlas curated by the National Institutes of Health as important resources the research community already relies on. Looking ahead, she sees federated data approaches, which enable collaboration without requiring the sharing of raw patient data, as critical to accelerating AI’s role in diagnostics at scale.
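To illustrate the federated idea, here is a deliberately simplified FedAvg-style sketch in Python. The logistic-regression update, site structure, and size-weighted averaging are generic assumptions for exposition, not any particular consortium’s protocol; the point is only that sites exchange model parameters, never raw patient records.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One site's training pass on its own data (logistic regression via gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)          # gradient step on local data only
    return w

def federated_round(global_weights: np.ndarray,
                    sites: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Each site trains locally; only weight vectors leave the site and are averaged
    centrally, weighted by how much data each site contributed."""
    updates = [local_update(global_weights, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)
```

In practice, techniques such as secure aggregation and differential privacy usually sit on top of this averaging step, but the data-locality principle is the part Maya highlighted.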

The TechArena Take

While grounded in biomedical engineering, Maya’s perspective reflects broadly applicable lessons: the most durable AI deployments are built on disciplined data practices, realistic expectations, and a clear-eyed understanding of evolving regulatory requirements. In a field where the stakes are measured in patient outcomes, the pressure to get it right is acute. If AI lives up to its potential in life sciences, the payoff won’t just be operational efficiency. It will be earlier diagnoses, more personalized treatments, and meaningfully better quality of life for patients facing some of the most challenging medical conditions.
