Season 6: Episode #165
Podcast with Alvin Liu, M.D., Inaugural Director of the AI Innovation Center, Johns Hopkins Medicine
In this episode, Dr. T. Y. Alvin Liu, Inaugural Director of the James P. Gills Jr. M.D. and Heather Gills AI Innovation Center at Johns Hopkins Medicine, shares his journey in healthcare AI, with a focus on image analysis and real-world applications.
Dr. Liu discusses the FDA-approved autonomous AI system for diabetic retinopathy screening, which enables early detection in primary care settings and improves screening adherence. He outlines successful AI implementations at Johns Hopkins Medicine, including prior authorization pilots using generative AI, and the importance of operational understanding in deployment. He also discusses the intersection of value-based medicine and artificial intelligence, and the challenges of implementing successful AI programs.
At the enterprise level, Dr. Liu emphasizes the need for strong AI governance to assess safety, effectiveness, and ROI. He outlines key challenges for AI startups, especially around reimbursement and regulation, and urges them to pursue sustainable business models. He also suggests closer collaboration among startups, VCs, and integrated health systems to bridge the gap between innovation and real-world adoption, which is essential for scaling AI responsibly and delivering long-term value in healthcare. Take a listen.
About Our Guest
Dr. T. Y. Alvin Liu, the James P. Gills Jr. M.D. and Heather Gills Rising Professor of Artificial Intelligence in Ophthalmology, was born and raised in Hong Kong. He subsequently attended Phillips Exeter Academy, Cornell University (B.A.) and Columbia University (M.D.). He completed his ophthalmology residency and vitreoretinal fellowship training at the Wilmer Eye Institute at Johns Hopkins University (JHU), and was named an “Emerging Vision Scientist” by the National Alliance for Eye and Vision Research in 2020. Currently, he holds dual faculty appointments at the JHU School of Medicine and School of Engineering. He is also the Inaugural Director of the James P. Gills Jr. M.D. and Heather Gills Artificial Intelligence Innovation Center, which is the first dedicated endowed ($10 million) AI center at the JHU School of Medicine.
As an interdisciplinary strategist at the intersection of venture capital, startup companies and health systems, he specializes in the implementation and scaling of healthcare artificial intelligence (AI) technologies in both clinical and operational domains, for example autonomous AI for diabetic retinopathy screening and generative AI for revenue cycle management. He has operational experience in various processes that are critical for AI deployment, including incentive alignment of stakeholders, IT integration, workflow design, key performance indicator establishment, and change management.
In addition to being an advisor/Medical Director for startup companies and a venture partner at a healthcare-focused investment fund, he has also completed executive education coursework at Wharton (venture capital), Harvard (digital transformation in healthcare), and Johns Hopkins (value-based healthcare).
In terms of AI governance, he holds leadership positions at both the health system and national levels. At Johns Hopkins Medicine, he is a co-chair of the AI and Data Trust Council, a leadership team that oversees all AI initiatives across the entire health system in the imaging, clinical, and operational domains. At the national level, he is a member of the American Academy of Ophthalmology AI Committee and represents ophthalmology at the American Medical Association AI Specialty Society Collaborative Meeting.
Q. Hi Alvin, welcome to The Big Unlock Podcast. It’s a pleasure to have you on board. As you might be aware, this was started by my colleague Paddy Padmanabhan from Damo Consulting, and we’re building upon what he left us as his legacy. I’m Rohit Mahajan, Managing Partner and CEO at BigRio and Damo Consulting, and I’m also the host of The Big Unlock Podcast.
Alvin: Rohit, thank you so much for having me on this podcast. I’m excited about the interesting topics we’ll be chatting about today.
So yes, happy to give you an introduction and some sense of where I came from and what I’m interested in.
My name is Alvin Liu. I was born and raised in Hong Kong. I came to the U.S. as a teenager to attend a boarding school in New Hampshire. After that, I did most of my schooling on the East Coast. I’m a practicing retinal surgeon. I did my ophthalmology residency and retinal fellowship at Johns Hopkins Medicine, and I stayed on as faculty.
I actively practice and take care of patients with a variety of retinal problems. Outside of my clinical work at Hopkins, I’m focused on artificial intelligence in several areas.
Within Johns Hopkins Medicine, I wear several hats. First, I’m the inaugural Director of the Gills AI Center at the Wilmer Eye Institute—this is the first endowed AI center at the Johns Hopkins School of Medicine, made possible by a generous $10 million donation by Dr. Gills.
Second, I’m a clinician-scientist involved in the development of clinical AI tools.
Third, in recent years, my focus has been on the implementation of AI tools for both clinical and operational purposes at the health system level. I’m sure we’ll dive into specific examples later today.
And fourth, I’m involved in AI governance. As you can imagine, there are many developments in AI in healthcare. In response, Johns Hopkins Medicine recently established a leadership team to oversee AI efforts across the entire health system, and I’m part of that team. I’ll be happy to talk about the AI governance work we’re doing at Johns Hopkins.
Q. That’s amazing, Alvin. I wonder—with so many responsibilities, how do you even find time? Do you sleep at all?
Alvin: I do sleep, and I try to get seven to eight hours of sleep every day. I think that's extremely important because I can't think very well if I don't get enough sleep. So I do put a premium on the amount of sleep I get.
Q. That’s amazing. So tell us, Alvin—you studied here on the East Coast and you’re a practicing physician. What attracted you to technology, especially emerging technologies, and when did you get involved with it? Also, talk to us about some of the work you’ve done in this area.
And even before that, if you’d like to talk about the health system itself, the geography, and the kind of patient population it serves, feel free to do that as well.
Alvin: Sure, I can start by talking about how I got involved in AI.
Near the end of my clinical training, around 2017–2018, I first got started with artificial intelligence. That was when a specific kind of AI technique called deep learning really started gaining traction.
Deep learning is the underlying architecture that powers much of what we know as AI today in 2025. It’s especially good at two things: image or video analysis, and more recently, natural language processing through large language models.
Back in 2019, most deep learning applications in healthcare focused on image analysis. As a retina specialist, I’ve always worked closely with images. If you look across medical specialties, radiology and ophthalmology are the most image-intensive, both in research and clinical care.
That’s why, when you look at AI research and real-world implementation today, the two medical fields leading the way—in the U.S. and globally—are radiology and ophthalmology.
What really got me interested in deep learning’s application to ophthalmology, and to medicine more broadly, was a study published by Google a few years ago. They showed you could train an AI model to predict someone’s age, sex, blood pressure, and smoking status just by looking at a retinal photograph.
That’s a superhuman capability—no doctor can do that. That one paper convinced me that AI would change medicine and society as we know it. And that’s something I want to dedicate the rest of my life to.
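For readers who want a concrete picture of what "training a model to predict attributes from a retinal photograph" can look like, below is a minimal, illustrative sketch of a multi-head convolutional network in Python (PyTorch). The backbone, heads, and input size are assumptions for illustration only; this is not the architecture from the Google study Dr. Liu mentions.

```python
import torch
import torch.nn as nn
from torchvision import models


class FundusMultiTaskNet(nn.Module):
    """Illustrative multi-head network: one image backbone with several
    prediction heads (age, sex, smoking status) sharing the same
    retinal-image features. Not the published Google model."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any image backbone would do
        feat_dim = backbone.fc.in_features        # 512 for resnet18
        backbone.fc = nn.Identity()               # keep the pooled features only
        self.backbone = backbone
        self.age_head = nn.Linear(feat_dim, 1)      # regression, in years
        self.sex_head = nn.Linear(feat_dim, 2)      # binary classification
        self.smoking_head = nn.Linear(feat_dim, 2)  # binary classification

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "age": self.age_head(feats).squeeze(-1),
            "sex": self.sex_head(feats),
            "smoking": self.smoking_head(feats),
        }


if __name__ == "__main__":
    model = FundusMultiTaskNet()
    dummy_fundus_batch = torch.randn(4, 3, 224, 224)  # stand-in for fundus photos
    outputs = model(dummy_fundus_batch)
    print({name: tuple(t.shape) for name, t in outputs.items()})
```

The point of the sketch is only to show how a single image encoder can feed several prediction heads; published work in this area trains such models on very large labeled datasets of fundus photographs.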
Q. That’s amazing. So, tell us a little more, Alvin, about Johns Hopkins as an organization and the kind of patient population you serve. And then we can dive into some of the use cases you’re seeing or currently working on.
Alvin: I’ll start by giving a sense of what Johns Hopkins Medicine is about, and then we can dive into specific examples.
Johns Hopkins Medicine is headquartered in Baltimore, Maryland. As an integrated health system, we operate six hospitals and around 50 outpatient sites. We serve a wide range of patients, most of whom are urban residents. Over the past several years, we’ve been working on a variety of AI initiatives. I’ll give you two specific examples.
The first is a clinical one—the deployment of autonomous AI for diabetic retinopathy screening, which we started in 2020. This is a significant application. When this technology was first approved by the FDA in 2018, it was the first-ever fully autonomous AI system in any medical field to get FDA approval. So my field, retina, actually made history. A recent study published in the New England Journal of Medicine AI showed that this technology is now the second most widely used clinical AI tool in the U.S. I think it’s a great gateway example to explain the broader medical AI ecosystem.
The idea is simple: everyone with diabetes should get an eye exam once a year. Diabetic retinopathy is the leading cause of blindness in the working-age population globally, and it’s expected to worsen with rising diabetes rates. It’s also well studied—we know that annual screenings, early detection, and timely treatment are effective and cost-efficient in preventing blindness. However, the challenge is that even in the U.S., only about 50% of patients with diabetes undergo these recommended screenings each year. The rate is even lower in many other countries.
Autonomous AI changes that. Traditionally, a primary care doctor would prescribe medication and manage diabetes, but eye screening required a separate appointment with an eye specialist, which creates friction. With autonomous AI, screening can now happen right in the primary care office. Imagine going in for a routine visit—your vitals are checked, medications refilled, and now, photos of your retina are taken. These images are analyzed in real time by an AI model in the cloud. Within a minute, the AI autonomously determines whether or not you have diabetic retinopathy.
If the answer is yes, you’re referred to an ophthalmologist. If no, you’re done with your screening for the year. We started using this at Johns Hopkins in 2020 and reviewed the data to evaluate its impact. The result? Yes, it worked. We saw improved adherence to the annual screening guidelines.
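To make the workflow concrete, here is a minimal sketch of the triage logic just described, written in Python. The grading call is a stand-in placeholder, not the FDA-cleared product's actual API, and the field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    gradable: bool       # image quality sufficient for a verdict
    referable_dr: bool   # AI found more-than-mild diabetic retinopathy


def grade_fundus_images(images: list) -> ScreeningResult:
    """Stand-in for the cloud inference call: a real deployment would send
    the photos to the vendor's service and get a verdict within a minute."""
    return ScreeningResult(gradable=True, referable_dr=False)  # simulated


def screen_patient(images: list) -> str:
    result = grade_fundus_images(images)
    if not result.gradable:
        return "retake photos or schedule a dilated eye exam"
    if result.referable_dr:
        return "positive screen: refer to ophthalmology"
    return "negative screen: done for the year, rescreen in 12 months"


if __name__ == "__main__":
    print(screen_patient([b"<fundus photo 1>", b"<fundus photo 2>"]))
```

The decision a deployed system hands back to the primary care team is essentially this binary: refer to an eye specialist, or rescreen in a year.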
When we looked closer, the greatest improvements were seen among historically underserved groups—African Americans and Medicaid patients. The positive impact was outsized for these communities, and we published our findings in Nature Digital Medicine about a year ago. The second example is operational—using generative AI for revenue cycle management.
For those unfamiliar, revenue cycle management is how health systems like Johns Hopkins get reimbursed for the care we provide. It’s complex and involves many steps and a lot of paperwork. Traditionally, automation efforts have relied on older, rule-based approaches like robotic process automation (RPA), which require a lot of rule writing and don’t handle exceptions well. This is where generative AI, particularly large language models, shines. These models are adaptive, understand text and unstructured data, and handle edge cases much better.
We’ve used GenAI specifically for prior authorization. It has significantly reduced the time needed to complete and submit each case, making the process more efficient overall. So, these are two real-life examples—one clinical and one operational—where we’re currently using AI at Johns Hopkins Medicine.
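As a rough illustration of how a large language model can be slotted into a prior authorization step, here is a hedged sketch in Python. The complete() helper is a placeholder for whatever model endpoint a health system actually wires in; the codes and field names are illustrative and do not describe the Johns Hopkins implementation.

```python
from dataclasses import dataclass


@dataclass
class PriorAuthCase:
    patient_summary: str        # de-identified clinical summary
    cpt_code: str               # billed procedure code
    diagnosis_codes: list       # ICD-10 codes
    payer_criteria: str         # the payer's published coverage policy text


def complete(prompt: str) -> str:
    """Placeholder for an LLM call (any chat-completion style API)."""
    return "[draft justification would be generated here]"


def draft_prior_auth(case: PriorAuthCase) -> str:
    prompt = (
        "Draft a prior authorization justification letter.\n"
        f"Procedure (CPT): {case.cpt_code}\n"
        f"Diagnoses (ICD-10): {', '.join(case.diagnosis_codes)}\n"
        f"Payer criteria:\n{case.payer_criteria}\n"
        f"Clinical summary:\n{case.patient_summary}\n"
        "Map each payer criterion to supporting evidence from the summary."
    )
    return complete(prompt)


if __name__ == "__main__":
    case = PriorAuthCase(
        patient_summary="Diabetic macular edema, vision declining despite laser.",
        cpt_code="67028",  # intravitreal injection, illustrative
        diagnosis_codes=["E11.311"],
        payer_criteria="Documented DME with central involvement ...",
    )
    print(draft_prior_auth(case))
```

A reasonable deployment would keep a human reviewer in the loop before submission; the time savings come from assembling the documentation and mapping it to payer criteria.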
Q. That’s very interesting. I have a couple of curious questions, Alvin. On the first example you talked about, in the primary care physician setting where a patient can go and get their eyes checked, does it need specialized equipment at this time? At some point, could I just use my iPhone camera, or look into some kind of kiosk and get it done at the airport, the way I look into a scanner when I go through security clearance?
Alvin: That’s a great question. You’re touching on a really important point—the nuts and bolts of implementation. Implementation is key when it comes to scaling any kind of technology, including AI.
The short answer is yes, it does require some specialized equipment, but these are very common. In short, you need a way to take a picture of the back of the eye, which we call a fundus camera. These are already widely used by ophthalmologists, and there are many different brands and models. So, if you step back, there’s already an existing supply chain and industrial process in place for producing these cameras.
Now, the traditional cameras are desktop-based. They’re not very portable—they’re a bit heavy, and you can’t easily carry them yourself. But their footprint is relatively small—about two feet by two feet—and they can sit on a mobile table. So they’re easily accessible, and the image quality is quite good.
Of course, there’s been work on developing more portable cameras, and many of those already exist. You can even use an adapter with a smartphone to capture retinal images. So the technology is there.
However, in real-world settings, most of the AI models for diabetic retinopathy—especially the ones used in clinical deployment—are designed for use with the more common desktop-based fundus cameras. While they’re larger, they typically deliver better image quality, which is why they’re still preferred.
Q. And then a curious question on the prior auth side—are you implementing and experimenting with prior auth across the board, or is it for a certain set of disease conditions and CPT codes? And is that software your team has developed, or something you’re using from an outside vendor?
Alvin: That’s a great question. What you’re getting at are the nuances across the different service lines—which ones would benefit from prior authorization automation and which would not.
Broadly speaking, there are certain fields that require a lot of prior authorization, and that’s how insurance payers do utilization management. And I’m painting with very broad strokes here.
Typically, the service lines or medical specialties that require prior auth tend to give out more expensive treatments—things like infusion medications in oncology or dermatology, or in our case, retina. We do a lot of injections into the eye—what we call intravitreal injections—for diabetes and age-related macular degeneration. These are examples where, because the treatments are expensive, they’re more likely to require prior authorization.
So when we did our pilot at Hopkins, we focused more on those specialties that require a lot of prior auths, versus ones where the care typically just goes straight through without it.
But that’s a great question, and you’re absolutely right—the devil is in the details. Even for a relatively specific step in revenue cycle management like prior auth, designing a pilot that makes sense, demonstrates ROI, and establishes relevant KPIs requires a very deep understanding of how medicine works and operates, not in a vacuum.
Q. So shifting gears a bit, Alvin – with the macroeconomic factors now impacting the whole ecosystem, including digital health (which is a very large part of the U.S. economy, as we all know) – what are some of the things that you feel are coming in the near future?
Alvin: I’ll answer your question from two opposite ends of the spectrum. First, from the startup angle—because in my role at Hopkins, I end up interacting a lot with startup companies in the AI space. And then I’ll speak from the enterprise perspective.
So on the startup side, I think one of the common mistakes startups make in the healthcare AI space is not considering—or not understanding—the reimbursement issue from day one. And I think that’s the most important thing.
One could argue that healthcare AI is still a very new field, so the payment mechanisms in the market aren’t yet mature enough to handle an influx of new products. It’s a tough situation, honestly, for healthcare AI startups. If you’re on a founding team that doesn’t have a deep understanding of how medicine works, you probably don’t know what a CPT code is, or that that’s how services get paid for. If you want to get a CPT code, very likely—especially if you’re in the AI and medical device space—you fall under the FDA’s purview.
And if you want FDA approval, we’re talking about $3 to $5 million off the bat. One mistake I see is startups being hyper-focused on building the product—both in terms of execution and how they spend their funding—without accounting for or budgeting for that FDA process. And even if you’re lucky enough to get FDA clearance, then you have to think: are there existing CPT codes that will reimburse you for the AI service? Very often, there are not. So then you have to go to the AMA to negotiate a new applicable CPT code.
That process takes a long time. And even if you succeed in getting a new CPT code, there’s no guarantee the payers will reimburse you. And even if they do, the rate might not be financially sustainable.
So from the startup side, you really have to think long and hard about your reimbursement pathway. Of course, there are other ways to get paid—not just through CPT codes—but that requires a deep understanding of healthcare business models. And in some cases, you may need to invent a new one.
Now, on the enterprise side: AI is here to stay. But for health systems, it’s chaotic. We—as an integrated health system—get many, many sales calls from AI companies every day. It’s a crowded, noisy space. That’s why having a robust AI governance structure that looks at multiple aspects—clinical, operational, ethical, financial—is absolutely necessary. And I think Johns Hopkins Medicine is one of the first major integrated health systems to give this serious thought.
It’s still evolving. We’re learning. But building a thoughtful and industry-friendly governance system is critical. And if you zoom out even more—on a very macro level—the billion-dollar question is: how will value-based care and AI come together? These are two very big trends that will intersect soon. What that intersection looks like is going to be very interesting.
Q. That’s very good insight, Alvin. So could you talk to us about any digital health programs that have been implemented and that you’ve been involved with, which improve access to care—or any other examples you’d like to share from the digital health side?
Alvin: The example I would give is the autonomous AI program for diabetic retinopathy screening. It’s a good example, and we already talked a little bit about it. What we learned is that roughly 80% of a successful program comes down to implementation and how you execute things.
So, for example, even if you have a successful screening program at the level of primary care, you still have to figure out how to get the patients who screen positive to ophthalmologists. That’s a different line of work.
You can extend this analogy to other areas as well—for example, to oculomics. Just to set the stage, oculomics is a relatively new field that connects biomarkers found in the eye—mostly retinal biomarkers—with systemic health conditions. I’ll give you a couple of examples. Right now, we can already use retinal images paired with AI to predict someone’s future cardiovascular risk, risk of kidney damage, or even dementia.
So, I think diabetic retinopathy is just an early example. We’re going to see an explosion in the adoption of oculomics. But the question is: even if you have an AI-based oculomics screening program in a community or primary care setting, and you identify patients at risk for various systemic conditions like Alzheimer’s or cardiovascular disease—what do you do next?
How do you set up a workflow to get these people to the subspecialists they need to see downstream? That’s still in the works. It’s very fluid. But I think that kind of thinking—being able to implement and execute things efficiently at scale—is going to determine the success of a lot of AI programs, especially when it comes to AI in oculomics.
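The downstream question Dr. Liu raises, getting patients who screen positive to the right subspecialist, is at its core a routing problem. Here is a deliberately simplified, hypothetical sketch; the condition names and clinic mappings are illustrative only.

```python
# Hypothetical mapping from positive AI screening findings to downstream care.
REFERRAL_MAP = {
    "referable_diabetic_retinopathy": "ophthalmology / retina clinic",
    "elevated_cardiovascular_risk": "cardiology or primary care follow-up",
    "possible_chronic_kidney_disease": "nephrology",
    "elevated_dementia_risk": "neurology / memory clinic",
}


def route_positive_findings(findings: dict) -> list:
    """Return the downstream referrals for every finding flagged true."""
    return [REFERRAL_MAP[name] for name, flagged in findings.items()
            if flagged and name in REFERRAL_MAP]


if __name__ == "__main__":
    print(route_positive_findings({
        "referable_diabetic_retinopathy": False,
        "elevated_cardiovascular_risk": True,
        "possible_chronic_kidney_disease": True,
    }))
```

The hard part in practice is not the lookup table but the workflow around it: scheduling, insurance, and making sure the downstream specialists have capacity.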
Q. Yes, that’s amazing. So, Alvin, we talked about governance a bit. How do you structure prioritization and funding, and what kind of operating models do you look at?
Alvin: Sure. I’m happy to talk about that. I’ll take a step back and give you a brief background on how this all came about.
Back in 2024, the executive leadership at Johns Hopkins Medicine started a task force to develop an implementation strategy that ensures Hopkins becomes a global leader in the responsible use of AI.
One of the key tenets the task force identified was that they wanted this to be a clinically led responsible AI program—meaning physicians like myself would and should play a major role.
The task force then identified seven core principles critical for responsible AI, including fairness, transparency, accountability, ethical data use, safety, and evidence-based effectiveness.
From these, we identified several implementation plans. A key one was to establish a governance process and framework that would integrate with existing governance structures. As a result, an AI oversight team was created. It’s an eight-person leadership team drawn from across the health system. I’m one of the eight, and we have purview over all things AI-related across clinical, imaging, and operational domains.
In a nutshell, what we’ve developed is a standardized framework for how AI vendors should interact with Johns Hopkins Medicine. So, for example, if you have a clinical AI product and want to engage with us, there’s a standardized intake process. First, you need to find an internal partner at Johns Hopkins who will advocate for you.
We then have standardized questionnaires—what is the tool used for? Do you have data cards? Model cards? What’s the expected ROI? How do you demonstrate it’s safe? And so on.
Based on the nature of the tool—whether it’s clinical, operational, or imaging—the application gets routed to different sub-teams. Then there’s an internal review committee that dives deep into the responses. We grade them, bring them back to the committee, debate, and often go back to the vendors with follow-up questions.
Ultimately, the committee does an up-or-down vote based on a variety of criteria and decides whether the tool can be implemented at scale across the enterprise—or not, and why.
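For readers thinking about how to operationalize a similar intake process, here is a simplified Python sketch of the flow described above: a sponsored submission, a standardized questionnaire, domain-based routing, and a committee decision. The domains, required questions, and scoring threshold are illustrative assumptions, not Johns Hopkins Medicine's actual rubric.

```python
from dataclasses import dataclass, field
from enum import Enum


class Domain(Enum):
    CLINICAL = "clinical"
    IMAGING = "imaging"
    OPERATIONAL = "operational"


@dataclass
class VendorSubmission:
    tool_name: str
    internal_sponsor: str   # the required advocate inside the health system
    domain: Domain
    answers: dict = field(default_factory=dict)  # intake questionnaire responses


REQUIRED_QUESTIONS = [
    "intended_use", "model_card", "data_card", "expected_roi", "safety_evidence",
]


def route(submission: VendorSubmission) -> str:
    """Send the application to the sub-team matching its domain."""
    return f"{submission.domain.value}-review-subteam"


def review(submission: VendorSubmission, scores: dict) -> bool:
    """Check that every required question is answered, then use a simple
    score threshold as a stand-in for the committee's up-or-down vote."""
    missing = [q for q in REQUIRED_QUESTIONS if q not in submission.answers]
    if missing:
        raise ValueError(f"follow up with vendor on: {missing}")
    return sum(scores.values()) / len(scores) >= 3  # assumed 1-5 grading scale


if __name__ == "__main__":
    sub = VendorSubmission(
        tool_name="ExampleTriageAI",   # hypothetical vendor tool
        internal_sponsor="Dr. Example",
        domain=Domain.CLINICAL,
        answers={q: "..." for q in REQUIRED_QUESTIONS},
    )
    print(route(sub), review(sub, {"safety": 4, "roi": 3, "fit": 4}))
```

In the real process, the vote is a committee deliberation across clinical, operational, ethical, and financial criteria rather than a numeric threshold; the sketch only shows where those criteria plug in.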
Q. That’s a very robust process, Alvin. Thank you for sharing such good examples, thoughts, and advice. I think we’re coming to the end of our conversation. Any final closing comments? Is there anything else you’d like to bring up, such as announcements, news items, or anything upcoming on your horizon that you’d like to share?
Alvin: What I’d say is that, at a high level, most people agree that AI is going to change medicine—and society—as we know it. The train has left the station. It’s no longer a question of whether we’ll adopt AI, but what the future will actually look like.
When it comes to healthcare specifically, it’s one of the most heavily regulated industries—and also one of the most personal. At the end of the day, we’re in the business of taking care of people and reducing suffering, and there’s a deeply human, emotional component to that.
I do believe that, for the good of humanity, we need much more collaboration in this space. And in particular, I see venture capital and startups as major engines of innovation.
What’s been missing—but is starting to improve—is a strong connection between the VC/startup world and integrated health systems. I think that relationship needs to get better. In the U.S., integrated health systems deliver the majority of care. So whether startups like it or not, their products will ultimately have to go through these enterprises.
That said, health systems don’t move as quickly as the tech industry. And that’s understandable—but I also think there’s room for improvement, particularly in how quickly decisions are made. Technology is evolving at an exponential rate, and AI is no exception. Things move fast—and for good reason.
So, there’s work to be done on both sides. I’m hopeful that we’ll see much stronger and closer collaboration between startups and health systems in the near future. If that happens, I think a lot of good will come out of it.
Subscribe to our podcast series at www.thebigunlock.com and write us at info@thebigunlock.com
Disclaimer: This Q&A has been derived from the podcast transcript and has been edited for readability and clarity.
About the host
Rohit Mahajan is an entrepreneur and a leader in the information technology and software industry. His focus lies in the fields of artificial intelligence and digital transformation. He is also the author of Quantum Care: A Deep Dive into AI for Health Delivery and Research, which has been trending #1 in several categories on Amazon.
Rohit is skilled in business and IT strategy, M&A, sales and marketing, and global delivery. He holds a bachelor’s degree in Electronics and Communications Engineering, is a Wharton School Fellow, and is a graduate of the Harvard Business School.
Rohit is the CEO of Damo, Managing Partner and CEO of BigRio, the President at Citadel Discovery, Advisor at CarTwin, Managing Partner at C2R Tech, and Founder at BetterLungs. He has previously also worked with IBM and Wipro. He completed his executive education programs in AI in Business and Healthcare from MIT Sloan, MIT CSAIL and Harvard School of Public Health. He has completed the Global Healthcare Leaders Program from Harvard Medical School.
Paddy was the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor & Francis, Aug 2020), along with Edward W. Marx. Paddy was also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He was the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare featuring C-level executives from the healthcare and technology sectors. He was widely published and had a by-lined column in CIO Magazine and other respected industry publications.