Season 7
Episode 199 - Podcast with Dr. Eric Stecker, Co-founder and Chief Medical Officer, Insight Health - Autonomous AI Turning Evidence into Action
In this episode, Dr. Eric Stecker, Co-founder and Chief Medical Officer at Insight Health, explores how autonomous AI agents are reshaping cardiovascular care and population health in the United States.
Dr. Stecker draws a critical distinction between autonomous action and autonomous decision-making, arguing that AI can deliver enormous clinical value today by acting autonomously on well-established care protocols, without waiting for fully autonomous diagnostic AI. He highlights that preventable conditions like hypertension and high cholesterol already have decades of evidence behind them; the real gap is in implementation, where AI-powered agents can identify at-risk patients, prompt appropriate prescriptions, and check in on medication adherence, thereby preventing millions of avoidable cardiac events.
Dr. Stecker emphasizes that clinician involvement, not just advisory oversight, is essential to avoid alert fatigue, documentation overload, and signal-to-noise failures. He states that meaningful AI adoption requires building trust with both healthcare workers and patients, starting with autonomous action today while responsibly advancing toward autonomous clinical decision-making tomorrow. Take a listen.
This guest appearance was facilitated through conversations initiated at ViVE.
About Our Guest

Dr. Eric Stecker is the co-founder and Chief Medical Officer at Insight Health and a cardiologist and professor of medicine at Oregon Health and Science University. He chaired the American College of Cardiology’s Science and Quality Committee, which is responsible for national cardiology practice guidelines and other clinical policy documents. He maintains a practice focused on advanced ablation and device implantation. He received a B.S. and M.D. in the Medical Scholars Program from the University of Wisconsin–Madison, and an M.P.H. with a focus on health management and policy from the University of Michigan.
Ritu: Hi everyone. Welcome to our next episode of the Big Unlock Podcast, season seven. We are really excited to have Dr. Stecker with us here today. Dr. Stecker is the Chief Medical Officer and co-founder of Insight Health, where he’s focused on applying artificial intelligence to improve real-world clinical outcomes.
He’s also a practicing cardiologist and professor of medicine at Oregon Health and Science University. His work sits at the intersection of cardiology, data science, and population health, with a particular emphasis on translating predictive insights into actionable interventions. Dr. Stecker has been a leading voice in how AI can move beyond detection to truly impact prevention and care delivery.
Today he’s joining us on the Big Unlock Podcast to explore how intelligent systems are reshaping the future of cardiovascular care. With that, a very warm welcome to all our listeners. Dr. Stecker, thank you so much for joining us today.
Eric: Hi. Thank you. It’s great to be on your podcast. It’s really an honor, and you’ve had some great guests and explored some great topics in the past.
Ritu: Thank you so much, Dr. Stecker. We are really excited to have this conversation because we just came back from HIMSS and AI was everywhere. One of the more interesting things — we went to the Cornell Health Tech Summit and they really talked about wearables and this constant stream of data that’s coming in, and how that’s going to make the annual physical obsolete.
We would love to hear your thoughts on the future of continuous cardiac care with wearables and remote monitoring with AI. Do you envision a future where cardiology becomes a continuously managed condition? And if so, how?
Eric: I think there are some very exciting possibilities, and I’m glad you brought up the idea of continuous health management and continuous monitoring.
That’s an area that’s created great excitement for many people, and I do think it is in our future. It’s important, though, to distinguish what already has a solid evidence base from what is still evolving. In clinical medicine, putting my academic cardiologist hat on for a moment, there are a variety of interventions and monitoring approaches we can employ, and what we emphasize depends on what is most beneficial for patients. Continuous wearables and continuous physiologic data monitoring have some evidence for benefit, and I think that will evolve significantly over time.
It’s an area where AI can intersect very well and very effectively, once autonomous AI agents are tested and FDA approved. It’s important to recognize, though, that there is a whole set of health interventions we know are very high-impact and beneficial — we just need to work on implementing them. We know what to do; we just need to do it, and AI can really improve care for those things right now.
We don’t need new technology or large clinical trials to show benefit. We have decades of very high-quality clinical evidence. Take statin medications — cholesterol medications for patients at high risk of cardiovascular events or with known heart disease. We have hundreds of thousands of patients studied in clinical trials showing significant benefit, yet identifying those patients, starting them on medication, and supporting them to ensure they take it as prescribed — these are the challenges we face right now. That gap results in hundreds of thousands or even millions of additional heart attacks and cardiovascular deaths. This is something AI can dramatically improve today, without the need for any further study.
Ritu: That’s a great answer — lots to unpack there. You talked about autonomous AI agents, and within Insight Health you’ve had particular success building a suite of these agents. We’d love to hear more about that. I also read a news update that you were recognized by OpenAI for crossing one billion tokens. What’s happening there?
Eric: Yes, thank you. I wish Sara, our CTO and co-founder, were on — he posted something about when we got the plaque for a billion tokens. We use multiple models, by the way, but for OpenAI specifically, by the time the plaque arrived in the mail, we’d already crossed ten times that threshold.
To back up a moment — the idea for our company: we have four excellent co-founders, which is a little unique. Two of us are mid-career physicians who have been practicing clinical medicine for some time and continue to do so. We’ve had to scale back significantly to build this company, but we love clinical medicine and our goal was to remove the roadblocks and allow us to have much more impact for patients in a much less painful way. There are so many speed bumps and potholes in the clinical care delivery system.
The idea came about when my co-founder Dr. Gore — our other Chief Medical Officer — and I went for weekly runs every Saturday morning, rain or shine in the Pacific Northwest. I remember it was early December 2022, dark, rainy, cold, probably 45 degrees. ChatGPT had just burst into public consciousness. Gore said, “This will transform medicine.” We were not in the tech world, not CS folks, so it took us by surprise too. We thought, this will transform medicine — we have to be involved and help guide where it goes for doctors in general and for us specifically. That was the genesis.
We were then connected — his brother is a Silicon Valley entrepreneur with a lot of contacts — and he connected us with Jay Malson and ultimately with Saron Siva. We created the company, and within six or eight months, we had, to our knowledge, the first autonomous interaction with a patient in real clinical practice. This is a specialist practice, which has been our initial focus.
There’s a lot of great technology being developed. Google, for instance, was doing some really great work at that time. But because it was being driven from the tech end rather than the clinical medicine end, they were using patient actors to do scenarios and working on the clinical intelligence layer from that angle. We understood that interacting with patients and intelligently offloading what AI can handle — allowing patients more time to discuss their condition, talk about their symptoms, and ask certain questions within guardrails — we knew that technology could do that, and we knew that was a big part of the benefit for clinicians and patients.
We were confident enough, with our clinical experience and the technological experience of our other two co-founders, that we could make this work. In the first example, we actually had a nurse sitting with the patient in the clinic. About the first week, a nurse was right next to the patient. Then we recognized, “Hey, this is working — let’s release it and let patients do it in their own homes.” That was the genesis.
Since then, we’ve built out a suite of autonomous AI agents that can interact with patients, review referrals, review prior authorizations, summarize the clinical encounter — you’re very familiar with AI scribes — and reach out to patients after the visit. All of these are orchestrated via sophisticated technology, allowing for a much more AI-intensive experience that still feels very comfortable for the patient and for clinicians.
Ritu: That’s really interesting, because we keep hearing that AI actually just augments humans. What has your experience been in reality? Are we over-romanticizing “human in the loop,” and at what point does AI need to take more autonomous decision-making? You mentioned that you eventually removed the nurse because AI was fully capable of handling it on its own. Would love to hear more about your thoughts on that.
Eric: I think you raise a couple of excellent points. I would divide autonomous action from autonomous decision-making — I think those are two separate things. They’re often conflated, and understandably so. The reason I separate them is that once you move into the world of AI making clinical decisions and issuing orders without a human in the loop, it requires a lot of technological work — both on the fundamental technology operations and on the AI safety assurance overlay layer.
It also requires a ton of clinical validation, testing, and oversight. That’s very important. We do not have enough clinicians — not enough specialists or primary care providers — so it’s very important to move in that direction. But it’s very difficult. The other challenge is trust: patient trust and healthcare worker trust among those working alongside autonomous decision-making AI within the healthcare ecosystem.
Most important, of course, are patients — they’re the center, and the reason we deliver healthcare. So autonomous decision-making is important and will come and needs to be developed, but I think we should avoid an excessive focus on it, because there is so much low-hanging fruit to pick right now to improve care through autonomous action.
There’s also appropriate debate about whether AI is really saving clinicians time, or whether they’re just having to look through more data generated by the AI scribe and double-check for hallucinations. The answer is: absolutely, well-designed and well-implemented AI can act autonomously in a way that really improves both the clinician and patient experience. It has to be done well, but that’s something we can and are doing right now.
There’s so much benefit — and by benefit I don’t just mean healthcare efficiency and its positive financial ramifications for health systems and payers. I also mean patient outcomes: reducing death and disability from disease. We know what we need to do. We need to diagnose high blood pressure, prescribe appropriate medications, and support patients. Wouldn’t it be wonderful if every time a patient was started on a new medication, a nurse — whether an AI nurse or a real nurse — contacted them and asked: “Did you fill that prescription? How’s it going? Are you having any side effects? Do you have any questions? And when would you like me to check in again?”
If a patient says, “Check in with me in a month” — you check back in a month. Are they still taking the medication? Is it okay? Just that action can dramatically reduce mortality among middle-aged and elderly people in the United States, simply by diagnosing high blood pressure, suggesting the correct medication for the clinician to start, and then checking in with the patient. This is not a “sending rocket ships to Mars” kind of thing. We can do this right now — and in fact, we do.
Ritu: That’s a very important distinction you’ve made between autonomous action and autonomous decision-making — something really worth thinking about. Thank you for clarifying that. But that leads directly into the next question: you made a correct point about ambient AI and other tools generating more data and cognitive load for clinicians. AI can generate a flood of alerts, risk scores, and predictions — more and more information. What are your thoughts on the signal-to-noise problem? How do you ensure AI is surfacing the right interventions without overwhelming clinicians or patients? At what point do you decide how much is enough?
Eric: That’s absolutely right. Using that example — if you’re checking in with patients weekly on an oncology issue and you generate a three-quarters-of-a-page summary every time and push it into the EHR, somebody then has to look at it and decide what to do.
This is exactly why having clinicians involved in technology development and implementation is critical. Dr. Gore and I are mid-career — we’ve been attending physicians for 15 to 17 years. We have a lot of experience in clinical medicine, and we still love it. We’re not looking for an exit; we’re making our ecosystem better.
Involving clinicians in a meaningful way — not just as Chief Medical Officers with a couple of consulting meetings here and there, but actually integrating them into product development and implementation — that will be essential. Because exactly what you raised will be highlighted immediately. “Wait, you’re checking in with this patient twice a week — how do we manage that information? Only escalate medical flags. We need a protocol to distinguish routine symptom responses from things that require documentation. Maybe it’s a floating dashboard.” Involving experienced clinicians is the key. If you leave this only to people who have an MD but never practiced, or only to technology developers, the issues you highlight will be a major problem.
I don’t want to whitewash this — the technology advances so fast that sometimes we need to catch up. Having AI scribe documentation that’s two pages long may suit some clinicians, but it creates a huge cognitive load for many others. You really need abbreviated summaries that highlight key things.
You also raised alert fatigue, which is extremely well-documented and a serious problem. I’m old enough to remember paper charts in medical school. When EHRs first came out, everyone was splashing alerts everywhere, and people just clicked “Okay” to get through them. That issue is better understood now, but we may be entering a new era with AI where we need to relearn that lesson. I hope not — and the more experienced clinicians are involved, the quicker we’ll learn it.
Ritu: It’s really interesting to hear that you’re mid-career and still love medicine. That reminded me — we usually start the podcast with an origin story about how you got into healthcare, and we didn’t do that this time. We’d love to hear how you got into this, how you chose cardiology, and how you got interested in the intersection of technology and healthcare. Doctors really bring a unique perspective because you’re there every day and you know what needs to work.
Eric: My father is an engineer and my mother is a social worker, and I am an amalgamation of those two ways of thinking — which is a great fit for medicine: thoroughly analytic, but also with the interactive and social insight that social work requires.
The reason I got into cardiology is that, honestly, memorization is not a strength of mine. To get into medical school you generally need to be an excellent memorizer, and I’m about average for a smart person — which, compared to the average medical student, makes me a bad memorizer. I was somewhat dismayed in the first couple years of medical school, memorizing lists and being graded on how many items out of ten or twelve you could retain. I did fine, but it was painful.
What I loved was physiology — how systems work together. I would understand it quickly, remember it well, and integrate it into practice. In my third and fourth years, I recognized that cardiology is rich with physiology, and that you can have a major impact on patients’ health and longevity. The interesting procedures, depending on the specialty, were also a draw. For all those reasons I gravitated to cardiology and ultimately to electrophysiology.
As for technology — my father worked at an engineering company, and when Oracle first became available to consumers, we got it: floppy disks and a shelf of manuals two feet long. I learned Oracle, worked at my dad’s company, and created SQL databases. That gave me a taste for technology, a sense of it. I never pursued it further — no programming, no CS degree — but it kept me interested and involved. And ultimately, that’s how Dr. Gore and I thought of the idea and then contacted our other two co-founders to start the company.
Ritu: Very interesting. Thank you for sharing that, Dr. Stecker. Do you think the role of the cardiologist is going to get redefined in an AI world as AI takes on more diagnostic and predictive tasks? How do you think the field needs to evolve to keep up with this wave of technology that seems to be threatening to overwhelm many specialties and healthcare in general?
Eric: I sure hope it will absolutely happen. My hope is that clinicians and patients do not need to evolve too much — that the main issue is establishing comfort and warranted trust with the implementation of technology, but that the technology can fit around the current experience and improve it dramatically, unlike EHRs of 15 years ago — or frankly right now.
An example of that is our autonomous AI agents that reach out to patients before visits to gather their basic medical history, understand a patient’s pain syndrome in detail, or for a cardiology patient, understand what procedures they’ve had done and where. All of this can happen at 2:00 AM if the patient is a shift worker — in their own home, on their own time. It doesn’t have to happen in the waiting room. All the questions the doctor or nurse might ask at the beginning of the visit can be handled in the comfort of the patient’s own home. That’s an example of how technology can work more effectively and can work around the needs of the patient.
Ritu: Meeting the patient where they are rather than having the patient come to you. When you talk about these autonomous agents, are you specifically talking about voice agents or how are they operating?
Eric: There are voice agents — you call up and talk with a very natural-sounding AI agent by voice only. There are also text-based interactions. Our company has technology we call “visual voice,” where it’s like texting on your phone as you’re speaking — and as the AI communicates with you, it can also pull in videos to that stream. For instance, instructional videos. An example: as an electrophysiologist dealing with arrhythmia, I frequently send out rhythm monitors. A company can send one straight to a patient’s home and they can put it on themselves. It has instructions, but if they don’t know how, our visual voice can show them a video right there — “Put it here, do this, click that.” You don’t have to look it up on the internet or call an 800 number for help.
There are many different interactive modalities. Again, this fits with the theme of making this as accessible as possible to patients, because that’s going to promote their engagement with their health and doing what we know will promote longevity.
Ritu: With all the implementations you’ve seen so far at Insight Health, have you seen particular success in any specific category, or do you think agents are generally successful across the board?
Eric: We have seen a lot of success. It very much depends on the context — whether it’s a mid-size clinic, an insurer, a smaller or large health system. The needs will be very different for each. That’s the strength of the deep tech stack we’ve developed; we can fit any kind of need.
One example, in keeping with public health — I have a master’s in public health as well — there is a well-established set of preventive activities: colorectal cancer screening through colonoscopies or stool-based or blood-based testing, breast cancer screening, cholesterol checks, blood sugar checks. These are very well established as beneficial, but it’s very challenging for payers and insurers to transmit that down into health systems, clinics, and to patients — even when they’re highly motivated and have bonuses aligned to good care through Medicare plans.
Payers have limited ways of intervening on those gaps. Say 50% of patients aren’t getting colorectal cancer screening, and death from colon cancer is much higher in their panel than it should be. Our technology can screen, reach out to the patient, assess their interest in colorectal cancer screening, educate them about it, assess their preference — colonoscopy, stool-based home test, or blood-based test — and then actually arrange it. It can schedule the colonoscopy, arrange prep, and screen whether they need to see a gastroenterologist first or can go straight to colonoscopy.
For me, the most impactful work is what touches public health and population health. But I recognize that every clinic and every clinician has different pain points, and we can insert ourselves into any of them.
Ritu: Time’s flown by and we’re almost at the end of the podcast — it’s been a great discussion. Thank you so much, Dr. Stecker. Any last thoughts or closing advice you’d like to share with our listeners before we wrap?
Eric: I know you’ve got a sophisticated audience, and I think it’s important for them to realize that as we progress along the spectrum from autonomous action to autonomous decision-making, it will be critically important to engage the workers within healthcare organizations to ensure it’s implemented well and that trust is established. And, of course, ultimately the patients.
The further upstream we are from patient interaction, the less critical that trust-building is. If it’s point solutions working on the back end of the healthcare ecosystem, that’s relatively straightforward. If it’s patient-facing, it’s really important to work with experienced people who can do it well. And as we progress toward the world of autonomous decision-making, wearables, and constant care delivery, we’ll really need to work through what that means as a society and build acceptance before it can gain traction.
Ritu: It might be happening sooner than we realize — the COVID era of AI. Thank you so much, Dr. Stecker. It’s been a pleasure having you on our podcast.
Eric: Thank you. It’s been a great time.
About the Host
Rohit Mahajan is an entrepreneur and a leader in the information technology and software industry. His focus lies in the field of artificial intelligence and digital transformation. He is also the author of Quantum Care: A Deep Dive into AI for Health Delivery and Research, which has trended #1 in several categories on Amazon.
Rohit is skilled in business and IT strategy, M&A, Sales & Marketing and Global Delivery. He holds a bachelor’s degree in Electronics and Communications Engineering, is a Wharton School Fellow and a graduate from the Harvard Business School.
Rohit is the CEO of Damo, Managing Partner and CEO of BigRio, the President at Citadel Discovery, Advisor at CarTwin, Managing Partner at C2R Tech, and Founder at BetterLungs. He has previously also worked with IBM and Wipro. He completed his executive education programs in AI in Business and Healthcare from MIT Sloan, MIT CSAIL and Harvard School of Public Health. He has completed the Global Healthcare Leaders Program from Harvard Medical School.
Ritu M. Uberoy has over twenty-five years of experience in the software and information technology industry in the United States and in India. She established Saviance Technologies in India and has been involved in the delivery of several successful software projects and products to clients in various industry segments.
Ritu completed AI for Health Care: Concepts and Applications from the Harvard T.H. Chan School of Public Health and Applied Generative AI for Digital Transformation from MIT Professional Education. She has successfully taught Gen AI concepts in a classroom setting in Houston and in workshop settings to C-Suite leaders in Boston and Cleveland. She attended HIMSS in March 2024 at Orlando and the Imagination in Action AI Summit at MIT in April 2024. She is also responsible for the GenAI Center of Excellence at BigRio and DigiMTM Digital Maturity Model and Assessment at Damo.
Ritu earned her Bachelor’s degree in Computer Science from Delhi Institute of Technology (now NSIT) and a Master’s degree in Computer Science from Santa Clara University in California. She has participated in the Fellow’s program at The Wharton School, University of Pennsylvania.
About the Legend
Paddy was the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor & Francis, Aug 2020), along with Edward W. Marx. Paddy was also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He was the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare featuring C-level executives from the healthcare and technology sectors. He was widely published and had a by-lined column in CIO Magazine and other respected industry publications.