From Reactive to Proactive Care with AI in Healthcare

In a recent episode of The Big Unlock podcast, Dr. Girish N. Nadkarni, Fishberg Professor of Medicine, Chair of the Windreich Department of Artificial Intelligence and Human Health, and Chief AI Officer of the Mount Sinai Health System, offered a clear, pragmatic perspective on how technology can reshape clinical care. Drawing on clinical practice, research, and operational leadership, Dr. Nadkarni emphasized that the real promise of digital health is not novelty for novelty’s sake but the ability to scale medical knowledge and buy clinicians time to make better decisions.

From Clinician to AI Leader

Dr. Nadkarni’s professional background informs his practical outlook. A trained internist and nephrologist who still spends roughly 25 percent of his time in patient care, he described how his clinical work and academic research intersect in a health system that uniquely integrates its medical school and clinical operations. That integration, he explained, creates a virtuous cycle: clinical problems inspire research questions, and academic breakthroughs can be translated, deployed, and scaled across the system. This operating model gives his perspectives an operational realism that is often missing in more theoretical discussions about AI.

AI as an “Arbitrage of Knowledge for Time”

One of the episode’s most resonant themes was Dr. Nadkarni’s description of AI as an “arbitrage of knowledge for time.” He used concrete examples to unpack this idea. Ambient AI scribes, for instance, transcribe clinical conversations and generate structured notes that clinicians can review and sign. Documentation that previously consumed about ten minutes per patient can now be finalized in roughly 30 seconds, Dr. Nadkarni noted. That regained time is not trivial: it removes after-hours clerical burden, improves clinician well-being, and returns face-to-face time to patients.
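To make that workflow concrete, here is a minimal sketch of the ambient-scribe pattern in Python. The podcast does not describe Mount Sinai’s actual tooling, so the transcription and drafting steps are hypothetical stubs (transcribe_audio, draft_soap_note); the point is the review-and-sign loop that keeps the clinician as the author of record.

```python
from dataclasses import dataclass

@dataclass
class SoapNote:
    subjective: str
    objective: str
    assessment: str
    plan: str
    signed: bool = False

def transcribe_audio(audio_path: str) -> str:
    """Hypothetical speech-to-text step; a real system would call an ASR service."""
    return "Patient reports two days of fever and cough..."

def draft_soap_note(transcript: str) -> SoapNote:
    """Hypothetical drafting step; a real system would prompt a model to structure the transcript."""
    return SoapNote(
        subjective="Fever and cough for two days.",
        objective="T 38.4 C, HR 96, lungs with scattered rhonchi.",
        assessment="Likely viral bronchitis.",
        plan="Supportive care; return if symptoms worsen.",
    )

def review_and_sign(note: SoapNote, clinician_edits: dict) -> SoapNote:
    """The clinician stays in control: edits are applied, then the note is signed."""
    for field, text in clinician_edits.items():
        setattr(note, field, text)
    note.signed = True
    return note

transcript = transcribe_audio("visit_audio.wav")
note = review_and_sign(draft_soap_note(transcript),
                       {"plan": "Supportive care; follow up in one week."})
print(note)
```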

Another application moves care from reactive to proactive. Dr. Nadkarni described a deterioration model deployed in production that predicts patient decline an hour before it happens. That early warning allows teams to intervene — adjusting fluids, antibiotics, or other treatments — potentially averting serious deterioration. Similarly, in the neonatal intensive care unit, vision-based AI systems can continuously monitor infants and flag signs of impending illness well before clinical deterioration. In these cases, the system provides a crucial window of time for lifesaving interventions: that is the practical meaning of arbitraging knowledge for time.
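The episode does not detail the deterioration model’s internals, but the general pattern is a risk score computed over a rolling window of observations, with an alert threshold tuned to fire ahead of decline. The sketch below illustrates that pattern; the features, weights, and threshold are invented for illustration, not taken from the deployed system.

```python
import math
from collections import deque

# Illustrative assumption: a fixed logistic model over four vital-sign features.
# Real deterioration models use far richer inputs and rigorous validation.
WEIGHTS = {"heart_rate": 0.04, "resp_rate": 0.15, "spo2": -0.20, "sbp": -0.02}
BIAS = 15.0
ALERT_THRESHOLD = 0.70

def risk_score(vitals: dict) -> float:
    """Logistic probability of deterioration from the latest vitals."""
    z = BIAS + sum(w * vitals[k] for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def monitor(vitals_stream, window: int = 3):
    """Score a timestamped stream of vitals; yield an alert when the rolling
    mean risk crosses the threshold, giving the team lead time to intervene."""
    recent = deque(maxlen=window)
    for timestamp, vitals in vitals_stream:
        recent.append(risk_score(vitals))
        if len(recent) == window and sum(recent) / window > ALERT_THRESHOLD:
            yield timestamp, sum(recent) / window

# A patient whose vitals worsen hour by hour triggers early alerts.
stream = [(t, {"heart_rate": 80 + 8 * t, "resp_rate": 16 + 2 * t,
               "spo2": 98 - 2 * t, "sbp": 120 - 5 * t}) for t in range(6)]
for when, risk in monitor(stream):
    print(f"t={when}: predicted deterioration risk {risk:.2f}, intervene early")
```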

Trust, Scale, and the Double-edged Nature of Technology

Dr. Nadkarni was careful to balance enthusiasm with caution. He stressed that healthcare is a trust-sensitive industry: “trust between a provider and a patient… is paramount.” Scale multiplies both benefits and risks. “If goodness can scale, badness can scale as well,” he warned. A well-intentioned model that contains bias or is applied without contextual safeguards can propagate harm at scale. He used an illustrative example: an algorithm predicting which patients might not show up for clinic appointments could, if misapplied, lead to cancelling appointments for patients who most need care — often the very people experiencing social barriers to access. The right response, he argued, is to address unmet social needs (transport, financial assistance) rather than withdraw services.
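The distinction is easy to state in code: the harm lives in the action attached to the prediction, not in the prediction itself. In this toy sketch (function names and thresholds are illustrative), the same no-show probability drives either a harmful policy or the supportive one Dr. Nadkarni advocates.

```python
def harmful_policy(no_show_prob: float) -> str:
    """Misapplied model: withdraws access from exactly the patients facing barriers."""
    return "cancel_appointment" if no_show_prob > 0.6 else "keep_appointment"

def supportive_policy(no_show_prob: float) -> str:
    """Same prediction, different action: address the unmet social need instead."""
    if no_show_prob > 0.6:
        return "offer_transport_and_reminder_call"
    if no_show_prob > 0.3:
        return "send_reminder"
    return "no_action"

print(harmful_policy(0.75))     # cancel_appointment
print(supportive_policy(0.75))  # offer_transport_and_reminder_call
```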

To mitigate such risks, Dr. Nadkarni outlined governance mechanisms: an assurance lab to monitor models and a REP (Risk, Ethics, and Policy) committee to evaluate applications. He argued that prototyping and scaling are valid ambitions only if paired with rigor, evidence, and thoughtful governance — a framework that ensures AI systems are safe, effective, responsible, and ethical.

Predictive AI Versus Generative AI: Different Characters, Different Guards

The podcast highlighted the distinction between predictive AI and newer generative models. Predictive models tend to be deterministic and repeatable: the same input yields the same output, which lends itself to reproducible evaluation and monitoring for drift and bias over time. Generative models and large language models, by contrast, are more flexible and non-deterministic; their outputs can vary and require broader evaluation against ethical principles and guardrails.
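That operational difference is easy to demonstrate. In the sketch below (a generic illustration, not any particular model), the predictive scorer is a pure function, so tests can assert exact outputs and monitors can track score distributions over time; the generative step samples from a distribution, so repeated calls with the same input legitimately differ and evaluation must target properties of the output rather than exact matches.

```python
import math
import random

def predict_risk(features: tuple) -> float:
    """Predictive model: deterministic; the same input always yields the same score."""
    weights = (0.2, 0.5, 0.3)
    return sum(f * w for f, w in zip(features, weights))

def generate_summary(prompt: str, temperature: float = 0.8) -> str:
    """Toy generative model: samples one of several valid phrasings, so repeated
    calls with the same prompt can legitimately differ."""
    candidates = [
        ("Patient stable; continue current plan.", 1.0),
        ("No acute changes; maintain current management.", 0.8),
        ("Condition unchanged; plan as before.", 0.6),
    ]
    # Softmax with temperature: higher T flattens the distribution -> more variety.
    weights = [math.exp(score / temperature) for _, score in candidates]
    return random.choices([text for text, _ in candidates], weights=weights)[0]

assert predict_risk((1.0, 2.0, 3.0)) == predict_risk((1.0, 2.0, 3.0))  # repeatable
print({generate_summary("summarize visit") for _ in range(10)})        # varies run to run
```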

Dr. Nadkarni described this contrast metaphorically as akin to “left and right sides of the brain” — one logical and structured, the other creative and non-linear. For institutions, the implication is that evaluation and governance strategies must be tailored to the technology class. Both model types require monitoring, but the methods and metrics differ.

Operationalizing AI: Teams, Governance, and Multidimensional ROI

Operational readiness is a recurring theme for leaders trying to move from pilots to scale. Dr. Nadkarni emphasized the importance of cross-functional and multidimensional teams that blend clinical domain expertise with technical capability. Effective AI teams include clinicians or allied professionals who understand the problem, MLOps or AI-ops engineers who build scalable models, research scientists who explore novel approaches, and ethicists who assess risk and fairness.

His description of Mount Sinai’s governance architecture is instructive: AI applications are grouped into domains — clinical care, operations, workforce, research, and students — and supported by an assurance function and a REP committee. This structure enables domain-specific workflows while maintaining centralized oversight.

On prioritization, Mount Sinai solicits idea submissions from across the health system and weighs them against a 12-to-24-month set of strategic priorities. Projects are evaluated not only for financial return but also for workforce impact, patient experience, and other non-monetary dimensions. This “360-degree” ROI approach ensures that success metrics include provider satisfaction, patient outcomes, and adherence to ethical milestones, with remediation plans if KPIs fall short.
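One way such a multidimensional scorecard might be operationalized is sketched below; the dimensions, weights, and remediation floor are illustrative assumptions, not Mount Sinai’s actual rubric.

```python
from dataclasses import dataclass

# Illustrative dimensions and weights for a "360-degree" ROI scorecard.
WEIGHTS = {"financial": 0.3, "workforce": 0.25, "patient_experience": 0.25, "ethics": 0.2}
REMEDIATION_FLOOR = 0.5  # any dimension scoring below this triggers a remediation plan

@dataclass
class Project:
    name: str
    scores: dict  # each dimension scored 0.0 to 1.0

def evaluate(project: Project):
    """Weighted overall score plus the list of dimensions needing remediation."""
    total = sum(WEIGHTS[d] * project.scores[d] for d in WEIGHTS)
    lagging = [d for d in WEIGHTS if project.scores[d] < REMEDIATION_FLOOR]
    return total, lagging

p = Project("ambient_scribe_rollout",
            {"financial": 0.6, "workforce": 0.9, "patient_experience": 0.8, "ethics": 0.4})
score, needs_remediation = evaluate(p)
print(f"{p.name}: overall {score:.2f}, remediate: {needs_remediation}")
```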

Where Next: Multimodal Integration and Augmentation

Dr. Nadkarni expects the next wave to be multimodal AI: systems that combine text, voice, image, and sensor data. Medicine is inherently multimodal, with clinicians integrating visual inspection, conversation, and subtle behavioral cues when making judgments, and AI will need to mirror that complexity. He cautioned that progress will be incremental and that augmentation, not replacement, is the likely near-term path. “In the short term… it’s about augmentation rather than replacement,” he said, underscoring both the potential and the limitations of current systems.

In addition, leaders should prioritize transparency with patients and staff: obtain informed consent, clearly explain how AI-generated notes and alerts are used, and give clinicians straightforward ways to review and correct machine outputs. Transparency builds trust and smooths adoption. Health systems must also invest in continuous monitoring and learning loops so that models improve over time and drift or unexpected bias is caught early. Training programs that help clinicians interpret AI outputs and act on recommendations will accelerate practical adoption. These operational investments in governance, monitoring, education, and workflow redesign are essential complements to any technical innovation.
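Continuous monitoring is the most code-shaped of these investments. One common, simple drift check is the Population Stability Index (PSI) between a model’s score distribution at deployment and its recent distribution; the sketch below is a generic implementation with a conventional alerting threshold, not a description of any specific health system’s tooling.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI between two score distributions; values above ~0.2 conventionally
    indicate a significant shift worth investigating."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch scores outside the baseline range
    b = np.histogram(baseline, edges)[0] / len(baseline)
    r = np.histogram(recent, edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)  # avoid log(0)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)  # scores captured at deployment time
recent = rng.beta(2.6, 4, 2_000)   # scores this month: the population has shifted
psi = population_stability_index(baseline, recent)
print(f"PSI = {psi:.2f}")
if psi > 0.2:  # conventional "significant shift" threshold
    print("Investigate drift before continuing to trust model outputs.")
```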

Practical Implications for Clinicians and Health Leaders

For clinicians and executives contemplating AI adoption, the episode offers several actionable takeaways. First, focus on problems that buy time or reduce friction — documentation automation and early warning systems are tangible, high-impact examples. Second, build multidisciplinary teams and governance processes before scaling. Third, adopt multidimensional ROI frameworks to capture patient and workforce effects, not just cost savings. Fourth, prioritize transparency and training so clinicians understand AI outputs and patients are informed about how data are used. Finally, monitor models continuously to detect drift, bias, and unintended consequences.

Dr. Nadkarni’s mantra — that AI is an “arbitrage of knowledge for time” — reframes technology as a time-saving, decision-enabling tool rather than a pure technical curiosity. By combining clinical insight, disciplined governance, and a patient-centered ROI lens, organizations can harness AI to shift care from reactive to proactive — while safeguarding trust and equity at scale.

The Healthcare Digital Transformation Leader

Stay informed on the latest in digital health innovation and digital transformation.
