Season 6: Episode #158

Podcast with Keith Morse, MD, MBA, Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics - Enterprise AI, Stanford Medicine Children’s Health

The Right AI Use Case Starts with Knowing Your Data and Your Workflows


In this episode, Keith Morse, MD, MBA, Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI at Stanford Medicine Children’s Health, shares how the organization is embracing tools like large language models (LLMs), machine learning (ML), and generative AI to improve patient experience, clinical workflows, and provider satisfaction.

Dr. Morse discusses the role of ambient listening and EHR-integrated tools in reducing documentation burden and enhancing patient care. He emphasizes the importance of upskilling staff through experiential learning and prompt engineering workshops to ensure meaningful adoption of LLMs. Dr. Morse also shares how Stanford Children’s is deploying LLMs through initiatives like their PHI-compliant chatbot, “Ask Digi.”

Dr. Morse also talks about PEDSnet, a pediatric EHR data-sharing consortium enabling large-scale research and potential LLM applications for unstructured data. He highlights the promise of agentic AI, virtual nursing, and ambient technology, stressing that effective AI adoption begins with a deep understanding of your data and workflows, cross-functional collaboration, and workforce readiness in navigating AI’s evolving role in healthcare. Take a listen.

Show Notes

01:14 What interests you in the healthcare industry segment to become the CIO of a hospital system?
02:47 How long have you been in the leadership position at UMC, where is it located, and what kind of population does it serve?
03:35 You have done a lot of work from a technology perspective to support the business needs of the hospital. You've done over 200 applications and transformed the EMR system. Would you like to share with the audience the thought process that drove those changes and what some of those changes were?
07:47 What do you think about your digital transformation efforts? Could you describe a few of them that have had an impact on the patient population?
08:30 Please describe, in your own way, what digital transformation means for provider systems such as yours. Where do you see it going? What are some of the challenges you might have faced, and how did it actually end up impacting patients?
11:24 How did you manage to change the mindset of the people? How did they manage to change themselves to adapt to this new world where technology, especially AI, GenAI, and other new technologies, is coming our way? How do you change mindsets, change behaviors, and change culture?
13:00 Would you like to provide one example of how the technologies you were implementing, and continue to implement in your hospital system, are accessible and usable by a variety of users, both within the hospital and outside it?
16:28 How do you innovate? Do you involve external parties? Do you have some kind of innovation-focused department, or is it part and parcel of everybody's daily life?
19:24 What are your thoughts on new technologies, especially Gen AI? Have you been experimenting with any predictive analytics or large language models? What would be your advice or thoughts for other healthcare leaders on how to go about this journey of exploration?
22:15 Standing here now and looking back, if you were able to go back and change one or two things, what would you like to do differently or have done differently?

Video Podcast and Extracts

About Our Guest

Keith Morse MD, MBA, is a pediatric hospitalist and Medical Director of Clinical Informatics - Enterprise AI at Stanford Medicine. His work in operational and research informatics focuses on meaningful deployment of machine learning in clinical settings. He serves as Stanford's co-site PI for participation in PEDSnet, an 11-site pediatric research consortium. His academic roles include Program Director for Stanford's Clinical Informatics fellowship.


Q: Hi Keith. I’m Rohit Mahajan, CEO and Managing Partner of Damo Consulting and BigRio, and also the host of the Big Unlock podcast. Welcome to the Big Unlock podcast. I’d love for you to introduce yourself, Keith—share your background, your role, and what motivates you on a daily basis.  

Keith: That sounds great. I’m Keith Morse. I am a pediatric hospitalist, which means I’m a physician who takes care of children admitted to Lucile Packard Children’s Hospital, part of Stanford Medicine Children’s Health. I’m a clinical informaticist, focused on studying and optimizing the use of technology and data systems for care delivery within a health system. My role is Medical Director for Enterprise AI, where I lead the team deploying and evaluating AI and large language models within care operations at the children’s hospital. I’m also an educator. I serve as the Program Director for our Clinical Informatics Fellowship Program, a two-year training program for physicians who have completed their specialty training and want to gain expertise in clinical informatics.

The program prepares them for roles as CMIOs, Chief AI Officers, or positions in industry. Finally, I’m a researcher. I conduct research in AI and also serve as the Co-Site PI for Stanford’s participation in PEDSnet. PEDSnet is an 11-institution EHR data-sharing consortium that supports large-scale investigations to improve pediatric health.

Q: That’s awesome. I was reading about your background, and as we discussed previously, you did your MBA in healthcare administration and then chose to become a physician.
What motivated you in that direction? Do you still practice and see patients, and how does it work within the Stanford Medicine health system?

Keith: Certainly. I studied economics and business as an undergraduate, and my school offered a combined MBA program: you add an extra year to your undergraduate training and get a taste of MBA-level coursework. I was doing my pre-med courses at the same time, and I had a couple of years between finishing the MBA and starting medical school. The job I had was writing SAS code for a consulting company, analyzing Medicare and Medicaid data for federal and state government agencies.

That was essentially a data scientist role before the term “data scientist” was commonly used. And I loved it: understanding what data you have, how you can use it to answer questions, how you summarize it, and how you present it back to the requester. Those are such core skills. I went to medical school in Philadelphia at Thomas Jefferson and then did a pediatrics residency in Phoenix, Arizona. During both of those periods, I got involved in research and operational projects where I served as the data scientist. I was writing R code, but I was also starting to see that, instead of just being a data consumer as a data scientist, I was now a data producer.

When you start delivering care, you are the person standing at the bedside, talking to the patient, making decisions, and also writing a note, ordering labs, and ordering follow-up. All of that is the beginning of the types of data that eventually get billed and trickle into databases that get used. In residency, I started working with our CMIO to analyze the data that was available through our EHR and really started enjoying it. 

Then I joined Stanford for their Clinical Informatics Fellowship and have been on faculty since then. We are fortunate here at Stanford Children’s to have a forward-looking CMIO, Dr. Natalie Pageler, who supported me and a small team in starting to build out our organization’s capabilities in operational AI back in 2020. We have been working at this for the last five years or so. Obviously, when ChatGPT hit, we got a whole lot busier, but that wasn’t the beginning for us. The processes we have in place now are built on that early investment.

Q: That’s pretty early to start in 2020. That’s a solid five years of experience with various kinds of AI implementations. How do you approach the business problem? Do you have specific use cases, and what do you do with them? How do you decide where to put your energy, time, and money?

Keith: I’ll answer this in three parts. First, I’ll talk about our process for identifying use cases, and then give a brief overview of two use cases that are relatively mature. First off, I love talking about use cases. In some ways, it’s a mythical concept. When operational AI folks get together at conferences or meetings, people sit around and whisper, “What are your use cases?” The reason is it’s such an important topic because we are, in essence, asking: In what ways have you found that AI is valuable? Where is the juice actually worth the squeeze? It’s easy to have research papers or proof-of-concept pilot projects showing that AI is theoretically useful. But when you have to make it work for an enterprise indefinitely, it’s a much different problem to solve.

We think about where AI can bring value without it costing $50 million or taking five years to implement. These are the types of considerations we focus on. This is actually a really hard question because three separate areas of expertise must align to arrive at good use cases.

The first is understanding AI technology in isolation. What is a large language model? What is a deep neural net? What is logistic regression? What can reasonably be expected of that technology? 

The second is understanding what infrastructure is available at my organization to support those tools.
It doesn’t help to have a sophisticated AI tool if you don’t have the data available, the compute power, or if the data isn’t available at the needed cadence for the algorithm. Another tricky part is that AI infrastructure is invisible. You can’t walk into a room and see where the AI lives. It exists in the ether. You have to be plugged into the organization’s IT structure to understand your current infrastructure. And it’s always evolving. We’re growing our infrastructure, making investments. It’s different now than it was two years ago, and it will be very different five years from now. 

The third, and by far the most important and the hardest, is workflow expertise. AI does not solve nebulous, non-specific problems; it must solve a specific problem for a specific human, in a specific job, at a specific point in their workflow. We summarize all of that by saying “workflow”: where in a person’s workflow AI can potentially be useful. The challenge is that healthcare is a diverse, complicated entity. My health system is different from the health system down the street, and different from those in Cincinnati or Texas.

Even within my organization, there are so many different workflows that no one person understands all of them. Our process for learning about these workflows — and this is something my team spends a lot of time doing — involves talking to people within our organization to help them tell us what problems exist and how AI could potentially help.
You would think that’s easy. You might think you can just talk to somebody and figure it out.
It turns out to be surprisingly difficult. The reason is this: if you imagine a standard organizational hierarchy — with director, supervisor, or executive oversight at the top, then managers, and then boots-on-the-ground staff — you find some patterns. This could be in Revenue Cycle, Supply Chain, Sanitation, or any other department. 

Usually, when you talk to the more senior folks, they are very excited about what AI can provide: speed, efficiency, safety, uniformity. They are generally onboard. Being a manager is different than being a boots-on-the-ground employee. Leadership often doesn’t understand what happens on a day-to-day basis in their teams. That’s not a dig on leadership; it’s just a fundamentally different job. People who oversee big teams — it’s too much to know. You can’t look to leadership to tell you where the problems are. Also, leadership is not the group whose jobs will be directly supported by AI. It’s going to be the people working on the ground. You have to talk to the folks whose workflows are potentially going to be impacted by AI to understand exactly what their workflows are and where AI could be helpful.

Usually, the way we do this is to start at the top and work our way down. We get buy-in from leadership, then get passed to middle managers. They might suggest three or four areas that are worth exploring. We then meet with each of those teams and ask specifically what it is they do, what data they look at, what problems they encounter, and how AI could potentially support their work. Our hit rate in those meetings is relatively low. Most problems are not solved by sophisticated AI. There could be other, simpler solutions that work just as well. Often, our biggest takeaway is directing teams to existing data and reporting tools that can solve their problems without the need for advanced AI. But through this process, we do identify good use cases, and that informs our future efforts.

The main takeaway is that no one outside your organization can credibly provide a guaranteed use case because they don’t know your AI infrastructure or your specific workflows. I don’t even know all the specific workflows within my own organization, so how could a third party or a hospital elsewhere know? There is a long-term role for internally identifying use cases, because they aren’t easily transferable from other institutions. One area where this is starting to shift is when AI is embedded within your electronic health record (EHR).

For example, our EHR is Epic. We use some tools they provide, like drafting responses to patient messages. This is a use case gaining traction nationally because Epic controls or manages all three areas I mentioned earlier: First, Epic understands that drafting patient messages is within the capabilities of a large language model. Second, all the infrastructure needed to use this drafting tool exists within Epic — no extra resources are needed. Third, the workflow for responding to patient messages is entirely within Epic, meaning they have a good understanding of the process. If the workflow required multiple steps outside the system — reading something in Epic, going to another machine, speaking to a patient, and coming back — it would introduce complexity and variability. But responding to a patient message is done entirely within Epic, so the workflow remains visible and consistent. That’s why use cases like this are more broadly transferable — because all the necessary components are self-contained.

Q:  Could you please share with the audience your experience with some of the large language models and where you have been successful in implementing them?

Keith: We have a couple of implementations, and we tend to publish most of the work we do, so a lot of it is available in the literature. One of our major use cases is using machine learning to help identify confidential content in teen notes. There are laws in the state of California and in most other states that protect teens seeking care for certain types of sensitive issues like mental health, sexual health, and substance use. In California, there are explicit laws that prohibit providers who care for adolescents from informing the teens’ parents that they are receiving this care. The reason for these laws is that there is strong evidence showing teens are more likely to seek care for sensitive issues if they are assured confidentiality. The challenge is that federal rules also require health systems to make patient records available to teens and their parents.

The 21st Century Cures Act requires that health information be available without undue effort. Basically, this means parents must be able to access and review records, usually through a patient portal within the health system. Anyone who takes care of adolescents faces a challenge: the federal government says you have to share all of the patient’s notes with the patient and their parents, but state laws say you can’t share portions related to three sensitive topics: mental health, sexual health, and substance use. We address this with what we call a “confidential note type,” where providers can document sensitive information. These confidential notes can be excluded from what’s shared with parents, and that system works well. The problem is confidential information that ends up in the regular notes we are theoretically sharing with the parents and families. This is a great application for large language models because the task is essentially processing large volumes of text to identify certain definable concepts. This is one of the projects we worked on before large language models came out; we developed a bespoke NLP model to help identify these concepts. We have since replicated that analysis using large language models, and the way we use it now is as a retrospective audit and feedback mechanism. We process the notes of different divisions, and then we can see who in the division is documenting in a way that’s potentially problematic. We can bring that back to the individual and also to leadership and say, “Hey, just FYI, we noticed that your notes contain a lot of information that would run afoul of the California state privacy laws.”

We can then help providers improve their documentation. Often, what’s happening is that some smart phrase or automatic pull-in of patient information that’s considered confidential is the culprit. We just edit or help update somebody’s note template, and the problem goes away. It’s those types of nudges that we think are helpful and that really get at what we want, which is to be able to share patient information with the teens and their parents. We want this information to get out; we just need to do it in a way that is thoughtful and responsible.
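To make the audit-and-feedback mechanism above concrete, here is a minimal sketch of what such a screening pass might look like, assuming a generic PHI-compliant chat client with a `complete()` method; the prompt wording, function names, and client interface are illustrative assumptions, not Stanford Children's actual implementation.

```python
# Minimal sketch of the retrospective audit described above: screen each note
# with a PHI-compliant LLM for confidential adolescent content, then aggregate
# flag rates per provider so the feedback can go back to individuals and
# leadership. Client interface, prompt, and names are illustrative assumptions.
from collections import defaultdict

SCREEN_PROMPT = (
    "You review pediatric clinical notes. Answer YES or NO: does the note "
    "below contain confidential adolescent content (mental health, sexual "
    "health, or substance use) that state privacy law protects from parental "
    "disclosure?\n\nNote:\n{note_text}"
)

def screen_note(llm_client, note_text: str) -> bool:
    """Ask the LLM whether a single note contains protected content."""
    reply = llm_client.complete(SCREEN_PROMPT.format(note_text=note_text))
    return reply.strip().upper().startswith("YES")

def audit_division(llm_client, notes):
    """notes: iterable of (provider_id, note_text) pairs.
    Returns each provider's fraction of notes flagged as potentially protected."""
    flagged, total = defaultdict(int), defaultdict(int)
    for provider_id, note_text in notes:
        total[provider_id] += 1
        if screen_note(llm_client, note_text):
            flagged[provider_id] += 1
    return {p: flagged[p] / total[p] for p in total}
```

In practice, it is the per-provider flag rates, rather than individual notes, that would be shared back with the provider and division leadership, matching the audit-and-feedback loop described above.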

Q: How about any other use cases?

Keith: Hospital operations. We have one that is not pediatric-specific, and it’s around tracking quality metrics. This is work we haven’t published quite yet, but health systems, as you well know, are on the hook to report on various quality metrics. One of those metrics is surgical site infections: identifying what percentage of patients have an infection at the site of surgery within 30 or 90 days after the procedure. That’s a marker of surgical quality, infection control, and lots of really important things for health systems. We have teams within our hospital whose job it is to track these metrics and then identify ways we can intervene and bundles we can introduce to improve them. But there is a core function of tracking and identifying whether a surgical site infection occurred. The way that works is that a team of folks essentially reads the patient’s chart after a procedure to identify whether or not a surgical site infection occurred. Reading patient notes about how the patient came back to the ER with what looks like a surgical site infection, or was prescribed antibiotics by their PCP, those types of things would flag as an identified infection.

It’s a ton of work to have a human read every chart following every surgery for 30 to 90 days. So we developed a process in which a large language model reviews the notes in that relevant time window. We ask the model a relatively straightforward question: “Do you think this note contains evidence of a surgical site infection?” Very simply, that. We provide a few examples to help it understand what some key indicators are. We do that for every single note the patient has in that post-surgical time window and come up with a score: basically, the percentage of notes the large language model thinks refer to a surgical site infection. When we run this on our historical data, we see scores ranging anywhere from zero to 60% of a patient’s notes. What we can do then is draw a threshold, say 5%, and have a human review everything above it. We still need a human to understand the nuances of what counts as a surgical site infection: reading the lab results, reading the imaging results.

We need a human to identify the true positives, but there are many true negatives. Somebody who never has an infection, does great, doesn’t come back to the health system—those probably aren’t worth a person’s time to review. If we can identify the true negatives, the reviewer could spend more time on the true positives. What’s fun about this is that the numbers are exceptional. For our preliminary data, looking at 2023 and 2024, if we set the threshold at 5%, the reviewer would be reviewing 70 to 80% fewer cases. We would miss somewhere between two and five cases. It’s not perfect, but we are nudging toward a world where we can spend our time reviewing the cases that are problematic and positive, and less time filtering through the vast majority of surgeries that don’t have an infection or an issue, moving those to the back of the queue. We certainly aren’t replacing a person. We still need the person to be involved, but we are helping that person focus their energies on the cases where their expertise and their interpretation of what is actually happening is maximized.
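A minimal sketch of the triage logic described above, assuming the same kind of generic LLM client as before; the prompt, the few-shot examples placeholder, and the 5% default threshold (taken from the interview) are illustrative, not the production system.

```python
# Sketch of the surgical-site-infection triage: ask the LLM a yes/no question
# about every note in the post-op window, score each surgery by the fraction of
# notes flagged, and send only cases above a threshold to the human reviewer.
# The client interface and prompt wording are assumptions for illustration.

SSI_PROMPT = (
    "Here are brief examples of documentation that does and does not describe "
    "a surgical site infection:\n{few_shot_examples}\n\n"
    "Do you think this note contains evidence of a surgical site infection? "
    "Answer YES or NO.\n\nNote:\n{note_text}"
)

def ssi_score(llm_client, postop_notes, few_shot_examples: str) -> float:
    """Fraction of a surgery's post-op notes the model flags as suggesting an SSI."""
    if not postop_notes:
        return 0.0
    hits = sum(
        llm_client.complete(
            SSI_PROMPT.format(few_shot_examples=few_shot_examples, note_text=note)
        ).strip().upper().startswith("YES")
        for note in postop_notes
    )
    return hits / len(postop_notes)

def triage(llm_client, surgeries, few_shot_examples: str, threshold: float = 0.05):
    """surgeries: dict mapping surgery_id -> list of post-op note texts.
    Returns the surgery ids that a human reviewer should look at first."""
    return [
        surgery_id for surgery_id, notes in surgeries.items()
        if ssi_score(llm_client, notes, few_shot_examples) >= threshold
    ]
```

The design choice mirrors the interview: the model filters out likely true negatives so the reviewer's expertise is concentrated on the cases most likely to be real infections, rather than replacing the reviewer.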

Q: Yes, that’s great to know. As we all know, Gen AI and LLMs are becoming all-pervasive. You have a large workforce at your health system, with people at various skill levels, and now they either have to use some of these systems that are going to be deployed or are already participating in the use case process you described. If they had a better appreciation of Gen AI, LLMs, and AI more broadly, they would be in a better position to do their jobs and embrace these tools. So how do you go about learning, development, and upskilling the workforce? What are your thoughts about that?

Keith: That’s a great question, and I think it is foremost on the minds of health system leaders who are hoping to use AI in any meaningful way. The reason it’s so important is that it is totally unrealistic to expect somebody to use a tool they are unfamiliar with, particularly when that tool is something as amorphous and multifaceted as a large language model. To use an analogy, large language models, distilled down, are basically office productivity tools. Think of Microsoft Excel: it has been around for 40 years. I used it for middle school projects. My mom used it when she was working at a bank in the eighties.

Most people, by the time they enter the workforce, have experience using Excel. It’s not unreasonable to make that a requirement of the job, or, when Excel gets augmented in new ways, to expect people to jump on it. It’s relatively straightforward. We have none of that with large language models. Expecting people to use these things without a ton of training is really unfair and unrealistic. That applies up and down the organizational chain. Even if you are an executive VP, you have exactly the same two and a half years of awareness of these large language models as everyone else. Historically, people could turn to coworkers or get extra training, but that doesn’t exist here because the technology is so new throughout the organization. You make an excellent point about needing to upskill our workforce and attaching an experiential component to that training.

Sometimes we hear about organizations using online modules for training, but I don’t think this is the sort of thing where watching some videos will really give you the insight you need to understand how these tools work. The reason it’s trickier in healthcare is that it’s one thing to tell somebody, “Hey, go play around with ChatGPT. See what you learn.” But we don’t want people to use ChatGPT to come up with recipes or poems in pirate. We want people to use large language models for their job because we suspect there are major productivity benefits that can come out of this. In a health system, most people’s jobs involve PHI—patient health information. You can’t put PHI into public models. So we need to make things available. We need to make a PHI-compliant large language model available to our employees. 

At Stanford Children’s and our adult hospital, we have made a large language model that is PHI-compliant available to our entire workforce. We’ve had it available for about a year now. Our IT department is called Digital Information Solutions, DigiIS, and our chatbot is called Ask Digi. The idea is to encourage people to start experimenting with an appropriately provisioned large language model to figure out how it can make their life better. I have no idea how somebody in a revenue cycle role could use large language models to make their life better. Most folks in that role don’t know yet either. We’re going to figure it out in a couple of years, but we have to let people experiment to learn that. 

We have three broad approaches for training: One is online modules and training. We have a couple of training tools, both generally about what large language models are and about Ask Digi and the local tools we have available. The second is prompt engineering workshops. Two of our clinical informatics fellows have developed a two-hour hands-on, in-person workshop on how to develop prompts and how to know that your prompts are doing what you want. Engaging with a large language model through a prompt is a totally new skill. Starting to get people to understand what is required in a prompt, lowering the fear factor, and giving people the confidence that you don’t have to be a computer scientist to prompt these models—it’s relatively accessible—and that comes through education sessions. The third is having a local champion or expert who gets involved in a pilot project and then brings that insight back to their teams. For example, in our work with surgical site infections, we are working with the folks in infection prevention and control. 

They are seeing the ways that large language models are helpful and not helpful. They are learning alongside us. Those folks will now become the experts in their team for how these tools can be used. Hopefully, that will propagate into more training. Engaging folks in these pilot use cases is helpful not only because you learn about the use case, but also because you train the person in that area, creating ripple effects across the organization.
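The workshop goal Dr. Morse mentions, knowing that your prompts are doing what you want, lends itself to a simple illustration. Below is a hypothetical sketch, not the fellows' actual curriculum, of checking a candidate prompt against a few hand-labeled examples before relying on it; the client interface and the example labels are assumptions.

```python
# Hypothetical illustration of "knowing your prompt does what you want": run a
# candidate prompt over a handful of hand-labeled examples and report simple
# accuracy before using it more widely. The llm_client interface is assumed.

LABELED_EXAMPLES = [
    # (input_text, expected_answer) pairs a workshop participant might label
    ("Routine well-child visit; no concerns documented.", "NO"),
    ("Discussed low mood and placed a counseling referral.", "YES"),
]

def evaluate_prompt(llm_client, prompt_template: str, examples=LABELED_EXAMPLES) -> float:
    """Fraction of labeled examples where the prompt produces the expected answer."""
    correct = 0
    for input_text, expected in examples:
        reply = llm_client.complete(prompt_template.format(note_text=input_text))
        if reply.strip().upper().startswith(expected):
            correct += 1
    return correct / len(examples)
```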

Q: That’s great to know that there are so many initiatives for upskilling, learning, and development in the organization. One other topic I thought I would touch upon, and you mentioned it early on in the conversation today, is PEDSnet. I would love for you to share with the audience how it helps in terms of interoperability and data sharing, and any new aspects it might bring as well.

Keith: Definitely. Happy to. Just for quick background, PEDSnet is a pediatric EHR data-sharing consortium that has been around since 2014–2015. It’s primarily run out of CHOP, Children’s Hospital of Philadelphia, which serves as the coordinating center. We have a relationship with them, as all members of the network do, and we send them a harmonized version of our EHR data four times a year. The harmonization is based on the OMOP common data model. The OHDSI community (pronounced “Odyssey”) is an open-source EHR standardization initiative, and PEDSnet has adopted their common data model as the bedrock of what we use.

We have some minor modifications that are specific to pediatrics and to care in the U.S., but it enables us to share data to a central location and conduct studies with a volume of patients that is unmatched elsewhere. About 15% of the children in this country are represented in the PEDSnet database. It’s a chance for us to do large-scale studies at a fraction of the cost it would otherwise take to develop these types of tools. As we move into the world of large language models, it’s not hard to envision a future where a large language model helps us process unstructured information from these different sites, extract relevant insights from notes, and conduct large-scale studies using unstructured data. That’s not here yet, but it’s in the near future. It’s really exciting to think about the potential.
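To make the value of harmonization concrete, here is a hypothetical sketch of a pooled analysis over OMOP-style tables once each site's EHR extract has been mapped to the common data model. The table and column names follow the OMOP CDM (condition_occurrence, person_id, condition_concept_id), but the pandas data frames, the concept ID, and the workflow are illustrative assumptions rather than PEDSnet's actual pipeline.

```python
# Hypothetical illustration of why a common data model helps: once every site's
# EHR extract lands in the same OMOP-style condition_occurrence shape, a pooled
# analysis is just a concatenation and a group-by. Concept ID and data are
# illustrative, not real PEDSnet content.
import pandas as pd

ASTHMA_CONCEPT_ID = 317009  # OMOP standard concept commonly used for asthma (illustrative)

def count_patients_with_condition(site_tables: dict[str, pd.DataFrame],
                                  concept_id: int) -> pd.Series:
    """site_tables maps site name -> that site's condition_occurrence table.
    Returns distinct patient counts per site for one condition concept."""
    frames = [df.assign(site=site) for site, df in site_tables.items()]
    pooled = pd.concat(frames, ignore_index=True)
    matches = pooled[pooled["condition_concept_id"] == concept_id]
    return matches.groupby("site")["person_id"].nunique()
```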

Q: That’s awesome. Talking about the near-term future and potential, if you were to look in the crystal ball, what are some of the things that you see coming down the pipe? Agentic AI that people are talking about, virtual nursing, and ambient listening. There’s so much going on. What do you think are some of the big things coming our way?

Keith: Definitely. We are rolling out a pilot of ambient listening and have a similar queue as many organizations do for things we’re going to be adopting in the near future.

Taking a slight step back, from a regulatory and oversight perspective, it’s important to remember that those types of issues aren’t going away, regardless of how excited we are about the technology. Any investment in technology requires an understanding of the upsides and the downsides. I think we are a little over-indexed right now on the upsides of this technology. We’re very excited about what it can do now and what it can do in five years. We’re starting to see some efficiency gains. The feedback, particularly around DAX and other ambient listening tools, is generally very positive. We are less concrete about the downsides, and what I mean by that is there’s lots of talk about potential issues that come with AI.

In the past, we have relied on federal or state agencies to provide oversight to make sure that those downsides aren’t present or are appropriately mitigated and recognized. AI, and particularly large language models, is proving very difficult to regulate because it’s such an amorphous entity. I think it is unrealistic that we’re going to see a robust regulatory system in the near future. What that means is that the burden for making sure this technology works is falling on the provider, and that’s great—until it’s not. What I mean by that is the burden is on providers to use these tools appropriately. At some point, we’re going to see a lawsuit from someone who claims they were harmed because of a large language model’s involvement in their care. Once that happens, we are going to get a very concrete piece of evidence about the types of downsides inherent in using these tools. We haven’t reached that point yet. 

Right now, the downsides are so amorphous that they’re easy to ignore. Once there is a price tag attached to the cost of mistakes, then things become different. If that price tag of a mistake is enormous, the overall value of these tools could change substantially. We know that, particularly early on, we are potentially introducing risks and mistakes with the use of these tools. Even if the output of a large language model is 99% accurate and you have a human in the loop who is reviewing it, and their review is 99% accurate, there are still errors that are present. Part of the reason I bring that up is that at Stanford, we take it very seriously. We are ultimately responsible for the use of these tools, how they impact our providers, and how they impact our patients. Nobody is going to take that responsibility from us. That is appropriate, but we are building systems and processes with that worst-case scenario in mind in order to prevent it from happening.
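The arithmetic behind the 99%-and-99% point is worth spelling out. A rough sketch, assuming the model's errors and the reviewer's misses are independent (an optimistic assumption) and an illustrative note volume:

```python
# Rough arithmetic behind the point above: even with an accurate model and an
# accurate human reviewer, some errors still slip through at scale. The volume
# figure is illustrative, not a Stanford statistic.
model_error = 0.01          # LLM output wrong 1% of the time
reviewer_miss = 0.01        # reviewer misses 1% of the model's errors
notes_per_year = 1_000_000  # illustrative annual volume for a large health system

residual_rate = model_error * reviewer_miss          # 0.0001, i.e. 1 in 10,000
undetected_errors = residual_rate * notes_per_year   # ~100 errors still get through
print(f"Residual error rate: {residual_rate:.4%}; "
      f"undetected errors per year: {undetected_errors:.0f}")
```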

Q: That is a very cautious and thoughtful approach. So with that, I think this has been a great session. Any closing remarks?

Keith: Thank you so much for providing this opportunity. It’s exciting to be able to talk about the work we’re doing here. I think there’s a lot to discuss regarding pediatric implications, so maybe we’ll find some time in the future to talk again.

We hope you enjoyed this podcast. Subscribe to our podcast series at www.thebigunlock.com and write to us at info@thebigunlock.com   

Disclaimer: This Q&A has been derived from the podcast transcript and has been edited for readability and clarity.


About the Host

Ritu M. Uberoy has over twenty-five years of experience in the software and information technology industry in the United States and in India. She established Saviance Technologies in India and has been involved in the delivery of several successful software projects and products to clients in various industry segments.

Ritu completed AI for Health Care: Concepts and Applications from the Harvard T.H. Chan School of Public Health and Applied Generative AI for Digital Transformation from MIT Professional Education. She has successfully taught Gen AI concepts in a classroom setting in Houston and in workshop settings to C-Suite leaders in Boston and Cleveland. She attended HIMSS in March 2024 in Orlando and the Imagination in Action AI Summit at MIT in April 2024. She is also responsible for the GenAI Center of Excellence at BigRio and the DigiM™ Digital Maturity Model and Assessment at Damo.

Ritu earned her Bachelor’s degree in Computer Science from Delhi Institute of Technology (now NSIT) and a Master’s degree in Computer Science from Santa Clara University in California. She has participated in the Fellow’s program at The Wharton School, University of Pennsylvania.

About the Host

Rohit Mahajan is an entrepreneur and a leader in the information technology and software industry. His focus lies in the fields of artificial intelligence and digital transformation. He is also the author of Quantum Care: A Deep Dive into AI for Health Delivery and Research, which has trended #1 in several categories on Amazon.

Rohit is skilled in business and IT strategy, M&A, sales and marketing, and global delivery. He holds a bachelor’s degree in Electronics and Communications Engineering, is a Wharton School Fellow, and is a graduate of Harvard Business School.

Rohit is the CEO of Damo, Managing Partner and CEO of BigRio, President at Citadel Discovery, Advisor at CarTwin, Managing Partner at C2R Tech, and Founder at BetterLungs. He previously worked with IBM and Wipro. He completed executive education programs in AI in Business and Healthcare from MIT Sloan, MIT CSAIL, and the Harvard School of Public Health, as well as the Global Healthcare Leaders Program from Harvard Medical School.

About the Legend

Paddy was the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor &  Francis, Aug 2020), along with Edward W. Marx. Paddy was also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He was the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare featuring C-level executives from the healthcare and technology sectors. He was widely published and had a by-lined column in CIO Magazine and other respected industry publications.

The Healthcare Digital Transformation Leader

Stay informed on the latest in digital health innovation and digital transformation.
