Season 6: Episode #181
Podcast with Dr. Girish N. Nadkarni, Chief AI Officer, Mount Sinai Health System
In this episode, Dr. Girish N. Nadkarni, Chief AI Officer at Mount Sinai Health System, discusses his background as a physician-technologist and his vision for AI in healthcare as a tool that augments rather than replaces clinicians.
Dr. Nadkarni shares insights on how AI is reshaping healthcare. He describes AI as an ‘arbitrage of knowledge for time,’ enabling physicians to spend less time on administrative work and more time with patients. He also shares real-world examples including ambient AI scribes that eliminate manual note-taking and predictive models that detect patient deterioration hours before it occurs, especially in critical care units such as the NICU. Dr. Nadkarni distinguishes predictive AI’s deterministic approach from generative AI’s flexible, non-linear potential, emphasizing the need for governance, ethics, and trust in deploying both. He outlines Mount Sinai’s cross-functional framework—spanning care, operations, workforce, and research—supported by an assurance lab to monitor bias and safety.
Dr. Nadkarni highlights current AI applications that save physicians time and enable proactive care, while predicting future developments will include multimodal integration of text, voice, images, and video to better reflect clinical decision-making processes. Take a listen.
About Our Guest
Dr. Girish N. Nadkarni is the Fishberg Professor of Medicine, Chair of the Windreich Department of Artificial Intelligence and Human Health, and Chief AI Officer of the Mount Sinai Health System. A physician-scientist and clinical informaticist, he leads transformative AI research in cardiovascular and kidney care, including the first FDA-approved AI-bioprognostic for kidney disease. He has over 460 publications, 40,000 citations, and an h-index of 93. As PI on ~$40M in grants and contracts, he also co-founded multiple FDA-cleared AI startups. Dr. Nadkarni serves nationally in AI leadership and mentors future faculty leaders, earning numerous awards for research and innovation.
Rohit: Hi Girish. Welcome to The Big Unlock podcast. Really wonderful to have you here.
Girish: Hi Rohit, hi Ritu. Thanks for having me. It’s a pleasure.
Ritu: Welcome, Dr. Girish. We have a lot of episodes—175 and counting. Listeners are always in for a treat, and they enjoy hearing new perspectives on AI. Every physician has such different thoughts about AI, how they got here, and what they think future trends will be. So, I’m looking forward to an engaging and interesting discussion. Welcome once again.
Rohit: Thank you. I’m Rohit Mahajan, Managing Partner and CEO at Damo Consulting and BigRio.
Ritu: I’m Ritu M. Uberoy, currently based in Gurugram, India. I’m also a Managing Partner at BigRio and Damo Consulting, and co-host of The Big Unlock podcast with Rohit. Looking forward to our discussion today. We’d love to start with your introduction.
Girish: Thanks, Ritu and Rohit. My name is Girish Nadkarni. I am the Chair of the Windreich Department of Artificial Intelligence and Human Health at the Mount Sinai Health System. The department is the first of its kind in the country, established within a health system and medical school.
I’m also the Director of the Hasso Plattner Institute for Digital Health at Mount Sinai. Those are my academic roles. In addition, I serve as the Chief AI Officer for the health system. Clinically, I’m an internist and nephrologist, and I still actively see patients about 25% of my time—both in outpatient settings and on the clinical floors.
Most of my time is spent doing research, but more importantly, translating, deploying, and scaling that research into clinical care and operations.
I grew up in Bombay, India, which some of you may know is one of the largest cities in the country. I live in New York City now, but the scale of Bombay—or Mumbai—can put New York City to shame, just in terms of the number of people.
I went to medical school in India. From a young age, I was interested in tech and computer science. As you both know, in India you typically either go into engineering or medicine. My parents made it clear: you’re either a doctor or an engineer—or you’re a failure. Both of my parents were doctors, and although I cleared the engineering entrance exam, I was drawn to medicine.
Medicine is a noble profession. What’s more noble than helping and healing people when they’re most vulnerable? More importantly, I was interested in prevention—helping people before they became that sick.
During medical school in India, I always wanted to further my training in the United States because it has led the world in innovation at the intersection of medicine and technology. I came to the U.S., completed a master’s degree and postdoctoral fellowship at Johns Hopkins, and since 2009 I’ve been in New York City.
I did my residency, fellowship, research, operational training, and faculty roles all at Mount Sinai. The health system is unique because it combines a medical school and a health system, which creates a free-flowing exchange between academic discoveries and clinical operations. Research can be translated and scaled across the system, while clinical practice inspires new basic research questions.
I feel fortunate to be in this role. Healthcare is a field where change is often imposed externally—by policies or new technology—but meaningful change must also come from within. If clinicians broadly—physicians, nurses, and other health professionals—become true evangelists for responsible tech, we can create transformative change in healthcare like never before.
Ritu: That’s a great introduction. Thank you so much, Girish. I’m really curious—right now with AI and the rapid pace of change, especially with LLMs, we’re hearing about a lot of physicians and clinicians getting into tech. But your interest in tech predated all of this. Tell us a little bit about that—how you felt back then versus now.
Today, people feel that AI is so enabling. It has democratized access to technology and made it easier for non-tech people, especially clinicians and physicians, to prototype and build things they couldn’t before. Do you feel the same way? Do you think the pace of change has been light-years beyond what came before?
Girish: I’ve been in this field for the last 15 years, and my interest in tech has always been for medicine’s sake. Technology is one way you can scale impact in ways a single physician—or even a single healthcare system—cannot. Tech can touch millions of lives at scale, which is what drew me to it.
The idea is to take what we know from medicine and codify it into a knowledge base that can then scale across demographics, populations, countries—even globally. That’s the promise of tech.
Yes, things have gotten easier in terms of building and prototyping because of large language models. If you think about what LLMs are, they’re essentially encoded human knowledge at scale. They’ve read all of human knowledge, compressed it, and made it possible to converse with and ask the right questions to get back knowledge at scale.
This democratizes access. Now, if a physician has an idea, they don’t need to rely on a lengthy process to prototype it. They can bring their idea to life as an app or a product and test how it works.
But this also introduces unique challenges. Let me start with a couple of assumptions I hope we all agree on. First, healthcare is an industry where trust is paramount—trust between provider and patient, and trust between the system and the broader social and cultural environment it operates in. We must safeguard that trust.
Second, while scale is useful, it has a flip side. If goodness can scale, badness can scale too. A useful, effective product can scale—but so can mistakes or bias. That’s why it’s essential to be rigorous, responsible, and trustworthy in healthcare. This is a trust-sensitive, patient-centric industry, and we cannot allow mistakes or bias to scale unchecked.
So yes, prototyping and scaling make sense—but they must be backed by rigor, good governance, and solid evidence.
Ritu: Great. That’s what we’ve been hearing across the board—it comes with its own baggage, and you have to be really careful to, as you said, have rigor, responsibility, and trust, which are paramount. Great answer, thank you. Rohit, would you like to ask a question?
Rohit: Yeah, Girish. Could you tell us a little more about your AI journey, or share examples you’ve seen that have positively impacted patients—something we could talk about today?
Girish: Oh, absolutely. I can talk about the broader AI landscape in healthcare right now. I think healthcare is at a transition point. The U.S. healthcare system costs a lot of money and is focused predominantly on sick care rather than healthcare. We wait for people to get sick, then we treat the sickness. The financial incentives are also aligned that way—and that needs to change.
The promise of AI, in my view, is that it’s an “arbitrage of knowledge for time.” What I mean is that because knowledge is compressed and encoded, you can use it to buy time for making decisions.
A great example in healthcare is ambient scribes. As a physician, suppose I have a 20-minute patient visit. In practice, I spend 10 minutes talking to and examining the patient, then another 10 minutes typing up my note. Alternatively, I could spend the full 20 minutes with the patient—but then I’d have to go home after putting my kids to bed and finish my notes.
Ambient AI scribes flip that paradigm. During a natural clinical conversation with a patient, you don’t have to type into the computer. The entire interaction is transcribed into a structured note. All you have to do is review it for accuracy and sign off. What used to take 10 minutes per patient now takes 30 seconds. That’s what I mean by arbitraging knowledge for time—you gain time back with every patient and avoid doing data entry after hours.
Another example: imagine if you knew a patient was going to deteriorate in the hospital one or two hours before it happened. That time would allow you to make a life-saving intervention. That’s another way AI arbitrages knowledge for time—by shifting care from reactive to proactive.
We developed a deterioration model that predicts if a patient is going to get sick an hour in advance. It allows clinicians to intervene, maybe give IV fluids or switch antibiotics, before the patient worsens. That model is already in production.
A second example, very close to my heart, is in the neonatal intensive care unit, where premature babies are often the sickest patients in the hospital. Using vision AI—similar to the technology in self-driving cars—we can continuously monitor babies and predict an hour in advance if one is going to get sick. That gives care teams time to act and potentially prevent deterioration.
So again, I like to say AI in healthcare is an arbitrage of knowledge for time. It gives providers time to make critical decisions and also gives them back time in their day that would otherwise be lost to data entry and processing tasks.
Ritu: Amazing. This neonatal ICU example is incredible. I can only imagine how many babies’ lives have been positively impacted. That’s a wonderful story—thank you for sharing, Girish.
Rohit: Girish, I think we were talking the other day about traditional AI and the new AI—generative AI and large language models. How do you distinguish between the two in your team? And how do you build teams for AI work, since these tasks often require cross-functional expertise—problem definition, technical skills, and domain knowledge? How do you structure that at your organization?
Girish: That’s a big question, but I’ll try to answer it in parts. AI has been used in medicine for a while now, but most of it has been predictive AI. It’s been, “Given this set of data, what is the probability that something good or bad will happen to the patient in a certain amount of time?”
The difference between that and generative AI is that predictive AI is usually deterministic. If you enter the same set of features, you’ll get the same answer every time—it’s very repeatable. Generative AI, on the other hand, is much more flexible and non-deterministic. You can get different answers to the same question depending on context.
It’s almost like the left and right sides of the brain. Predictive AI is extremely logical, follows a process, and arrives at an answer. Generative AI is more creative and non-linear. Both have to be governed differently, and the way you evaluate them is also different.
With predictive AI, you focus on making sure it isn’t biased against certain populations, and that when you deploy and monitor it over time there’s no drift—meaning the model doesn’t degrade as the underlying data changes. Generative AI requires a more holistic evaluation. You have to ensure it aligns with a broader set of ethics and values, and put guardrails in place so it doesn’t deviate from them.
We’re setting up what we call an “assurance lab” to monitor both predictive and generative AI in different ways, but with the same underlying goal: AI must be safe, effective, responsible, and ethical.
And for that, you need cross-functional teams. You need someone who understands the clinical problem—that could be a doctor, nurse, medical assistant, or allied professional. Then you need someone who can build AI models at scale, usually an MLOps or AIOps engineer. You also need researchers, because sometimes you need people to think outside the box and explore different solutions. Finally, you need a layer of ethics—people who understand the risks and ensure the work is responsible.
We’ve set up a governance structure that combines all digital and AI applications, divided by domain. There’s an AI care domain focused on clinical care; an AI operations domain for back-office operations, finance, and regulatory; an AI workforce domain for workforce applications; a research domain; and a student domain. All of these are underpinned by the assurance lab and by the REP—Risk, Ethics, and Policy—committee.
Why is this important? Because even if an algorithm is pristine, its application can have issues. For example, you could have an algorithm to predict which patients won’t show up for clinic appointments. But if applied incorrectly, you might cancel the appointments of the very patients who most need care. Often, patients who miss appointments have unmet social needs. The right solution isn’t to cancel on them—it’s to address those needs, like providing transportation or financial support.
So yes, you need cross-functional and multidimensional teams to tackle clinical, operational, or workforce problems. But everything must run through governance to ensure the work is safe, effective, responsible, and ethical.
Rohit: That’s great to know. And when you look at these applications, do you have some kind of framework or success metrics—what people often call return on investment? How do you go about prioritizing projects, especially now that so many physicians are becoming interested and more ideas are coming your way as Chief AI Officer?
Girish: Yes. We have a process where anyone in the health system can submit an AI or digital idea. I work closely with the Chief Digital Transformation Officer, Robbie Freeman, and our Chief Digital Officer, Lisa Stump, on this.
First, as a health system and academic institution, we define a list of clinical and operational priorities for the next 12–24 months. Part of the process is aligning ideas to those priorities. If an idea aligns with strategic priorities, it gets higher prioritization and faster execution.
Second, every idea must go through governance, as I described earlier, to ensure it is safe, effective, ethical, and responsible—not only in development, but also in execution.
Third, we calculate ROI for all priority projects. But this is a multidimensional ROI—it’s not just financial. We also consider workforce impact and patient experience impact. It’s a holistic, 360-degree evaluation.
We also monitor projects over time to ensure they meet predefined milestones. For example, with an ambient AI project, one milestone might be provider satisfaction. How many providers are satisfied with it? We track KPIs, and if a milestone isn’t met, we require an explanation and a remediation plan.
This way, we ensure ROI is not just about finances—it’s about employee experience, broadly defined, and patient experience as well.
Rohit: And one tough problem I’ve seen before—and something the C-suite often struggles with—is where the money comes from. If you’re focused on cost savings, you might say, “We’re going to save costs by becoming more efficient.” That’s one way to look at it. But how do you think about budgeting for AI? Is it longer-term or shorter-term? Any high-level perspective you can share?
Girish: If you align your roadmap—your operational map—with the larger strategic priorities, then you automatically get C-suite buy-in. These are priorities already set by leadership, and we’re just enabling and accelerating them. That also makes it easier because budgets have already been allocated for those priorities.
And honestly, the cost of generating software has gone down significantly. You know this better than me—the cost of coding agents and developing software is far lower than it used to be. So if incentives, strategic priorities, and broader vision are aligned, the more tactical, lower-scale items start to align as well.
Rohit: That’s great to know.
Ritu: So Girish, you mentioned “ambient” a couple of times, and we all know that’s been one of the big success stories across hospitals and health systems. Our listeners always want to hear what’s coming next—straight from our guests. In terms of trends, what do you think will be the next big win for AI? Where is it headed in the next 6–12 months?
Girish: In the next 12 months, I think we’ll see much more multimodal integration, which is especially relevant in medicine. Right now, most AI is text-based. But that’s not how clinical decisions are made. Clinical decisions involve talking to the patient, observing them, noting how they look, how they speak, and even their frame of mind in that moment.
The technology isn’t fully there yet, but I think it will be in the next few years. And I still believe that in the short term—say, the next 10 years—it’s about augmentation rather than replacement. These systems, while powerful, are fragile, and trust is a huge factor. People aren’t ready for a fully autonomous AI doctor.
So I see multimodal integration—combining text, voice, images—becoming more common. We’ll see a push beyond ambient recording, perhaps toward video recording with patient consent, sensors, and more. Broadly, though, I think the shift will be from reactive medicine to proactive medicine, and AI in all its forms will be an enabler of that.
Ritu: Great point. We’ve also been hearing a lot about voice agents and conversational AI, so I think you’ve hit the nail on the head there. Thank you.
Rohit: Thank you, Girish. As we come to the close of the podcast, are there any final thoughts you’d like to share with our listeners?
Girish: I’d just say this: AI is a very impressive technology. For the first time in human history, it’s approaching cognition, which until now was the preserve of humans. We should think of AI as a collaborator, not a replacement—something that helps us become the best version of ourselves.
At the same time, we shouldn’t wear blinders and assume it’s flawless. We need honest assessments, constant monitoring, and recognition of its flaws. There’s hope, and yes, there’s hype, but it should always be tempered by reality and rigor.
Rohit: That’s a great closing thought. Thank you so much, Girish. We really appreciate your time.
Ritu: It was wonderful having you on the show. Thank you for joining us.
Girish: Thank you both. It was a pleasure.
————
Subscribe to our podcast series at www.thebigunlock.com and write us at [email protected]
Disclaimer: This Q&A has been derived from the podcast transcript and has been edited for readability and clarity.
About the host
Paddy is the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor & Francis, Aug 2020), along with Edward W. Marx. Paddy is also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He is the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare featuring C-level executives from the healthcare and technology sectors. He is widely published and has a bylined column in CIO Magazine and other respected industry publications.
Rohit Mahajan is an entrepreneur and a leader in the information technology and software industry. His focus lies in the field of artificial intelligence and digital transformation. He has also written a book, Quantum Care: A Deep Dive into AI for Health Delivery and Research, which has trended #1 in several categories on Amazon.
Rohit is skilled in business and IT strategy, M&A, sales and marketing, and global delivery. He holds a bachelor’s degree in Electronics and Communications Engineering, is a Wharton School Fellow, and is a graduate of the Harvard Business School.
Rohit is the CEO of Damo, Managing Partner and CEO of BigRio, the President at Citadel Discovery, Advisor at CarTwin, Managing Partner at C2R Tech, and Founder at BetterLungs. He has previously also worked with IBM and Wipro. He completed his executive education programs in AI in Business and Healthcare from MIT Sloan, MIT CSAIL and Harvard School of Public Health. He has completed the Global Healthcare Leaders Program from Harvard Medical School.
Ritu M. Uberoy has over twenty-five years of experience in the software and information technology industry in the United States and in India. She established Saviance Technologies in India and has been involved in the delivery of several successful software projects and products to clients in various industry segments.
Ritu completed AI for Health Care: Concepts and Applications from the Harvard T.H. Chan School of Public Health and Applied Generative AI for Digital Transformation from MIT Professional Education. She has successfully taught Gen AI concepts in a classroom setting in Houston and in workshop settings to C-Suite leaders in Boston and Cleveland. She attended HIMSS in March 2024 at Orlando and the Imagination in Action AI Summit at MIT in April 2024. She is also responsible for the GenAI Center of Excellence at BigRio and DigiMTM Digital Maturity Model and Assessment at Damo.
Ritu earned her Bachelor’s degree in Computer Science from Delhi Institute of Technology (now NSIT) and a Master’s degree in Computer Science from Santa Clara University in California. She has participated in the Fellow’s program at The Wharton School, University of Pennsylvania.