Episode #57

Diana Nole, EVP and General Manager, Nuance Healthcare and Yaa Kumah-Crystal, MD, Assistant Professor of Biomedical Informatics, Vanderbilt University Medical Center

"Voice technology will enhance care delivery from within the EHR"

Hosted by Paddy Padmanabhan

In this episode, Diana Nole and Dr. Yaa Kumah-Crystal discuss the progress, future state, and challenges of voice-enabled technology in healthcare. They also talk about its usability and application in a post-COVID-19 world.

According to Diana, in a post-COVID world, we will see more acceptance of voice-enabled technology not just for clinical documentation but as a virtual assistant to command and control things within the physician workflow ecosystem. The pandemic accelerated the willingness and acceptance to look at things differently, such as with telehealth; voice technology will be next. It will be helpful in offering suggestions and recommendations to enhance care delivery from within the EHR system.

Dr. Kumah-Crystal explains that a new era of voice mechanics, and of how we interact with voice technology, is instrumental in making queries and commands in the EHR to retrieve information. A new dynamic of patient engagement will emerge from voice as a medium and as a method by which a provider engages with the EHR in the presence of the patient. Take a listen.

Diana Nole, EVP and General Manager, Nuance Healthcare and Yaa Kumah-Crystal, MD, Assistant Professor of Biomedical Informatics, Vanderbilt University Medical Center in conversation with Paddy Padmanabhan, CEO of Damo Consulting on the Big Unlock Podcast – “Voice technology will enhance care delivery from within the EHR”

PP: Hello again, and welcome back to my podcast. This is Paddy, and it is my great privilege and honor to introduce my special guests today: Diana Nole, EVP and General Manager of Nuance Healthcare. Diana is familiar to our audience; she's been on this podcast before, and we'll talk a little bit about that. And Dr. Yaa Kumah-Crystal, Assistant Professor of Biomedical Informatics at Vanderbilt University Medical Center in Nashville. Welcome to the show. Let me kick this off. Diana, I think this may be a question for you to start with. I've always considered voice to be one of those highly promising, emerging technologies that is going to transform the way we live and work. In healthcare, we have struggled with how technology has taken away some productivity, even though it's delivered a lot of other benefits. Voice enablement and voice recognition are potentially among those technologies that could ease the burden on physicians. That's been the thesis for the rapid growth of voice enablement in healthcare. So maybe you could start by sharing with our listeners a brief overview of the progress that we have made as an industry with voice enablement in healthcare. Where is voice finding its application today, especially in a post-COVID-19 scenario?

DN: Well, voice has definitely been on a journey. It's not new to the industry. As I had mentioned, I've known Nuance now for 15 years; I recently joined them on June 1st. But voice, dictation, the ability of this technology to do clinical documentation, has been around for a while. More recently, with the capabilities of all of our data processing, we've definitely advanced to where it's much easier to adopt. You don't have to train the system as much, and it's much more accurate. And so, the ability to get broader sets of users to use it has definitely grown. What you see now in the post-COVID world is even more acceptance of using voice enablement not just for clinical documentation, but a bit more as a virtual assistant, being able to command and control things within the ecosystem that the physician is working in. For example, the announcement on UpToDate was about being able to search through voice, to say, "Hey, Dragon, pull up what's on UpToDate on this particular topic." And in the post-COVID world, a simple thing that we'll probably hear more about here on the show is the contactless ability to drive and control commands. We're actually seeing interest not just from physicians, but for medical devices as well. And so, we also think that there are going to be more people wanting to use voice in this post-COVID world.

PP: Yeah, that’s interesting. Contactless experience has become a big buzzword and a theme as people start going back into clinics and hospital environments. We will unpack that a little bit more. Now, when we last had you as a guest on this podcast, you were with Wolters Kluwer leading their healthcare business. And now you’ve recently announced a partnership with Wolters Kluwer to help clinicians and researchers with voice-enabled content search. Can you talk to us a little bit about what that means?

DN: Yeah. So, what you can actually do is say, "Hey, Dragon, search UpToDate for particular treatment options," and this then helps the clinician retrieve information in UpToDate. A big thing with physicians is not having to go between systems, but having it work seamlessly. So, you can retrieve information in UpToDate, a leader in clinical decision support: medication, dosage, disease stage, drug interactions, all of that is readily available. And then, with the Dragon engine, you can also issue commands in terms of what you want the EHR to do. So hopefully this makes information more easily accessible: efficiency, productivity, just a better user experience. That’s what we’ve done with UpToDate. And we think there may be some other things we can do together, so I’m very excited. Although I’ve left Wolters Kluwer, it’s really nice to continue to work together in that partnership.
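For readers curious about the mechanics, here is a minimal sketch, in Python, of how a transcribed utterance might be routed to a content search. The wake phrase, the "search ... for ..." grammar, and the routing function are illustrative assumptions for this article, not Nuance's actual Dragon API, which is proprietary.

```python
import re

# Hypothetical sketch: route a transcribed voice command to a content search.
# The grammar below and the search behavior are assumptions for illustration.
COMMAND_PATTERN = re.compile(
    r"^(?:hey,?\s+dragon,?\s+)?search\s+(?P<source>uptodate)\s+for\s+(?P<query>.+)$",
    re.IGNORECASE,
)

def route_command(transcript: str) -> str:
    """Match a transcribed utterance against the search grammar."""
    match = COMMAND_PATTERN.match(transcript.strip())
    if not match:
        return "No command recognized; treat the utterance as dictation."
    query = match.group("query").rstrip("?. ")
    return f"Searching UpToDate for: {query}"

print(route_command("Hey, Dragon, search UpToDate for metformin dosing"))
# -> Searching UpToDate for: metformin dosing
```

In practice a virtual assistant would use a trained intent classifier rather than a regular expression, but the flow (wake phrase, intent, slot, action) is the same.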

PP: Yeah, yeah. Sounds really exciting. Dr. Kumah-Crystal, you have been using this new technology at Vanderbilt University. Can you tell us a little bit about how you’ve been using it? Are you using it in a patient experience context? Are you using it for research? Can you tell us a little bit about where you’re using this?

YK: Yes. So, I’ve been a voice enthusiast for a long time. I’ve been using dictation to write my notes. What I’m so excited about is this new era of voice mechanics and how we can interact with voice technology beyond just dictation, which is extremely useful, to make queries and commands in the EHR to retrieve information. I just think it’s a really exciting new way to interact with technology, because so often when we need to find out something, we’re forced to drill down through different tabs and scroll through sheets and whatever. We have to fight the technology just to get the information we need. But to be able to say a command, to make a request and have the information retrieved for you, takes away some of the burden and irritation of the technology that has integrated itself into our regular workflow. In medicine, it’s a culture of asking questions and making requests. As an attending, I am usually surrounded by fellows and residents and nurses, and we have our morning rounds and we talk about the patient, and someone will ask, hey, what was her last sodium? Or, hey, pull up that last journal article for so-and-so. To be able to use that same method, that same medium, to ask for information from the electronic health record makes it a more nuanced part of our care team as well, where you can interact with it at the same level you’d interact with the rest of your colleagues.
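As an illustration of the kind of query Dr. Kumah-Crystal describes, here is a hedged sketch of how "what was her last sodium?" might resolve into an EHR lookup over a standard FHIR REST API. The server URL, patient ID, and the intent-to-LOINC mapping are placeholders for this article, not Vanderbilt's actual VEVA implementation.

```python
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"  # placeholder server
LOINC_BY_INTENT = {"sodium": "2951-2"}  # LOINC: Sodium [Moles/volume] in Serum or Plasma

def last_lab_value(patient_id: str, intent: str) -> str:
    """Fetch the most recent lab result matching a recognized intent."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": f"http://loinc.org|{LOINC_BY_INTENT[intent]}",
            "_sort": "-date",  # newest first
            "_count": 1,       # only the most recent observation
        },
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    if not entries:
        return f"No {intent} result on file."
    obs = entries[0]["resource"]
    qty = obs["valueQuantity"]
    when = obs.get("effectiveDateTime", "unknown date")
    return f"Last {intent}: {qty['value']} {qty.get('unit', '')} on {when}"

# Usage (hypothetical patient ID):
# print(last_lab_value("12345", "sodium"))
```

The interesting engineering lives in the step before this sketch: mapping free-form speech to the right intent and the right code, which is where the voice assistant earns its keep.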

PP: Yeah. So, what is the big play here? Is it productivity, is it advanced intelligence? What is the play here?

YK: I would say easing the friction of getting to where you want to get to. The whole point of the EHR is that we input information so that at some point we can get it out more efficiently. Unfortunately, because of limitations with time and money and whatever it takes to make it more functional, it’s not that easy to get information out. It’s always several keystrokes away, several tabs away, several lots of things away. But being able to make a command verbally and instantiate the thing you want relieves some of that frustration, where you feel like you’re always having to go on a journey to justify the thing you need. I was trying to explain this to my nine-year-old son, and he was like, oh, it’s kind of like being a wizard, you just say a spell and it happens. Well, that’s a very nine-year-old way to think about it, but I like that metaphor. You just act on the things and they come into being. And I think that’s part of the value of being able to articulate the things you need.

PP: Yeah, and again, usability has become a hot topic in healthcare. In some of the work that we do, usability as a term is finding its way into all kinds of contexts: usability for patients when they come online to access care, and now usability for caregivers, so they can quickly get to the information they need to take care of their patients. What about the other side of the table? What about patients? How do they get to see the benefit of voice recognition technology? Is there something that providers are doing to enable voice recognition when a patient walks into the clinic, for instance? You know, Diana talked about this contactless experience. Is that something that the patient can take advantage of as well? Or is it mostly confined today to the caregiver side of the business?

YK: I was so excited to answer this question. There are different ways in which the patient benefits. From the provider-facing side of things, if a provider can easily call out orders, to say, place a consult for social work, or refill the metformin, and maintain their contact with the patient while just asking for those things to be fulfilled, as if they had a scribe in the room, that itself helps the patient and the provider feel more connected, like they’re in the same place together. And the provider is not distracted by having to pull away and go to their computer screen to enter these things. Also, and I think there should be a study of this, there is the benefit of the patient hearing the provider place these orders or make these requests: patients better understand what is going on in their clinical encounter, what things the provider thinks are important, what things the provider wants to call out. And maybe that would even make the encounter more engaging for the patient, make them want to ask more questions about why we would want to try metformin or why you asked about this specific thing. I think there’s a new dynamic, an element of patient engagement, that will absolutely stem from having voice as a medium and as a method by which the provider engages with the EHR while the patient is there. But on the patient-facing side, there’s actually a lot of great work going into patient-facing voice assistants, so the patients themselves can interact with the EHR. And I think that’s a wonderful opportunity for people who might not be as comfortable with technology and navigating computers to just be able to talk to their machines and get information back out. That’s really exciting and can really decrease barriers for people with disabilities, because everybody knows how to talk. From a very early age, people know how to engage with computers and with media using their words, and being able to fully leverage that can take us to a whole other plane of usability, productivity, and engagement.

PP: Yeah, that is so well said. The importance of having a natural language interface that not only increases your productivity, but also provides some degree of comfort and ease during the course of the doctor-patient interaction, is definitely something that I see a lot of other firms paying attention to as well. Now, you mentioned scribing as one of the core tasks of this voice-enabled interface. Diana, I want to ask you this question. There is obviously a huge amount of opportunity, headroom if you will, for just being able to use voice to do things like scribing, which can release a significant amount of time for physicians, but also improve the doctor-patient interaction so that physicians and their patients can have eye-to-eye contact, and all of the other benefits that have been talked about a lot. What’s next? Tell us a little bit about what you see as the roadmap for the future. Where can we hope to see, let’s say, advanced analytical tools being used in the context of voice recognition, to improve our ability to do more advanced tasks, risk assessments, or just being able to predict things from a person’s voice? I’ve read that you can actually read biomarkers in the tone of the voice. Can you talk a little bit about some of the future state that is emerging from voice?

DN: There are some interesting things. The last note that you had there made me think of something that we recently talked about at Nuance, and that is being able to recognize, perhaps, age. I’m not quite sure exactly how I would apply that in healthcare, but I think you’re right on in terms of the things it will allow us to do. What we’re really excited about is moving from voice as an interaction between one person and the machine to an ambient environment, and that is really where we’re focused. That brings great interaction between the physician and the patient, because now you’re diarizing the conversation between the patient and the doctor. I think that builds a lot of transparency, but also a lot of clinical and other types of accuracy in what’s being captured. And then if we can get that into a very good structured format, the hospital itself can run a lot of analytics on it. You can continue to do the voice commands. But what I see in the future is also the machine helping to catch things that might be within the EHR, or other items, and offering up suggestions and recommendations, either during the visit or post-visit, to continue to enhance care and make sure that nothing falls through the cracks for the patient. And when you think about the ambient environment, and then what we talked about with patient interactions, and producing this capability for other care providers such as nurses, it will definitely unlock and bring back a little bit of what we’ve talked about in the past: bringing back that trust between the physician and their patient. So, I think the whole ambient environment will unlock yet another capability for analytics, recommendations, those types of things. And that’s what we’re heavily working on right now.
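To make the diarization step concrete, here is a small sketch of what structuring a diarized conversation might look like once an upstream speech pipeline has produced time-stamped, speaker-attributed segments. The segment format and the sample visit are invented for illustration; Nuance's ambient technology is not publicly documented here.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds into the visit
    end: float
    speaker: str   # e.g., "clinician" or "patient", assigned by diarization
    text: str

def to_turns(segments: list[Segment]) -> list[dict]:
    """Merge consecutive segments from the same speaker into conversational turns."""
    turns: list[dict] = []
    for seg in sorted(segments, key=lambda s: s.start):
        if turns and turns[-1]["speaker"] == seg.speaker:
            turns[-1]["text"] += " " + seg.text
            turns[-1]["end"] = seg.end
        else:
            turns.append({"speaker": seg.speaker, "start": seg.start,
                          "end": seg.end, "text": seg.text})
    return turns

visit = [
    Segment(0.0, 3.1, "clinician", "How have your sugars been running?"),
    Segment(3.4, 6.0, "patient", "Mostly in the 140s before breakfast."),
    Segment(6.2, 8.0, "patient", "A little higher on weekends."),
]
for turn in to_turns(visit):
    print(f'{turn["speaker"]}: {turn["text"]}')
```

Once the conversation is in this structured, speaker-attributed form, the downstream analytics and note generation Diana describes become ordinary data-processing problems.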

PP: Yeah, ambient computing has become another hot topic because of all of the possibilities: to be able to remotely monitor or observe what is going on with a patient and pick up things through voice and other natural language interfaces, especially now in the COVID context. So, does your technology seamlessly integrate with the EHR systems and other decision support tools? One of the big challenges in healthcare is this: with all these technology tools, it’s a challenge to make them all work together in a seamless fashion. It’s getting better, no doubt, but there is still a lot of unfinished business. Do you want to talk a little bit about that?

DN: Well, with our rich history in healthcare, that’s something we rely heavily on, and we definitely have to have those connections. We have long-standing relationships with the EHRs. We can’t do without them, as you said. So, we do have that interaction with them for the virtual assistant; we work with them on how we actually get information out and then get it back in. You may have seen that we recently announced, for example, connections with Cerner on that. So, we’re very excited about that. We cannot make it work without it. And that’s why it’s so important for us to be agnostic. We do the same thing with telehealth platforms. We work with various telehealth platforms, so the doctor can use the same capability whether they’re in the office or on telehealth; it spares them from having to use a different tool. And then you really just have to work with all these different systems. That’s something I think we as an industry are collectively getting better and better at.

PP: Looking into the future: today, when you look at text-based interfaces, you go on your iPhone and start typing a text message, and it finishes the sentence for you, because it’s been observing what you and people like us write in the normal course of the day. It’s been analyzing billions and billions of these messages, and it helps you complete the sentence. Do you think voice is going to get there? You start to say something, and the voice-enabled interface completes the sentence for you?

YK: I think it’s going to depend on what your end goal is. There might be some folks who would find that really beneficial, and again, going back to the concept of accessibility, that might be a feature for some people. For others, I think most people really look forward to technology helping to facilitate and optimize what they’re already doing. One of the joys of being a doctor that is often pulled away from us is engaging with the patient, having a conversation, learning about their story, and being able to give them advice. Because you’re often having to pull away and turn back to your computer to type, you don’t have the opportunity to do that. So, having something like an ambient scribe that can capture the words you say and create your note for you, so you don’t have to, will give you the opportunity to be present in that way and complete your sentences yourself. But yes, it would make sense for some folks, for whatever reason, to have a tool that can produce those suggestions for them. And I absolutely love that feature in phone and email that auto-suggests and completes sentences for you. I also wonder if it’s saying what it thinks I would have said, or suggesting what I should say, and whether the results of my email are really just the computer’s mind. Regardless, it sounds good and it’s all spelled correctly, so I can just hit send and save myself an extra five minutes.

PP: I’m not so good with the auto-finish. More often than not, I’m sending the wrong message out and manually correcting it.

YK: That’s an interesting point that you bring up, with regard to the technology just working and not having to worry about all the setup and integration. One of the biggest limitations of voice technology in the past was that, because of the word error rate, you spent almost as much time going back to fix the things it thought it heard as you did dictating. That was a huge barrier to adoption. But with machine learning techniques, even without training, a novice can pick it up and just get started. And I think that’s one of the big factors in making this a more mainstream thing that anyone can and would adopt, because if all you have to do is talk, and that’s something I had to do anyway, then what’s the problem?
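For reference, the word error rate Dr. Kumah-Crystal alludes to is conventionally computed as the word-level edit distance between the recognizer's hypothesis and a reference transcript, divided by the number of reference words. A textbook sketch (the example utterances are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("what was her last sodium", "what was her last so damn"))
# -> 0.4 (one substitution plus one insertion over five reference words)
```

Modern recognizers have pushed this figure low enough that, as she notes, a novice can start dictating without per-user training.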

PP: You did bring up something that I was going to bring up in the closing minutes of our conversation, which is: what are some of the challenges with the technology? Obviously, the error rate is one of them, and the error rate could be linked to a lot of different things. Accents, for instance. We live in a very diverse professional environment; healthcare, as much as any other industry, is very diverse. Do you see this as technology, therefore, that needs to evolve a little bit more? I do agree with you that, from all accounts, it’s come a long, long way in the last few years. Diana, where do you see these multilingual capabilities headed?

DN: Yeah, I definitely think there are going to be questions about what level of accuracy really delivers the right results, as was mentioned before. I think that will continue to get better. And if you think about the future I talked about, being able to scour things and offer recommendations, I do still think that’s a vision that can be achieved. But it will take a while because, as you know, we all get those recommendations from where we’ve shopped, et cetera, and not all of them are quite accurate. The other thing that people have helped remind me of is that when you think about this type of interaction with patients, we have to remember that many of our patients still don’t have access to the technology. So, we also want to keep in mind the evolution that our patients are going through. But I am very, very optimistic. I think COVID-19 has actually accelerated everyone’s willingness to look at things and do things differently. Telehealth is a great example; voice will be next. I’m very optimistic that there will be some really wonderful, positive things coming out of a very challenging circumstance.

PP: Fantastic. And I guess on that note, we’re going to have to leave it there. Dr. Kumah-Crystal and Diana, it’s been such a pleasure speaking with you. And I look forward to following all the progress with voice. I got to tell you, I am personally very, very interested in where the technology can take us at a personal and professional level. And I look forward to following all the work. Thank you once again for being on the show.

DN: Thank you.

YK: Thanks for having us.

We hope you enjoyed this podcast. Subscribe to our podcast series at www.thebigunlock.com and write to us at info@thebigunlock.com

Disclaimer: This Q&A has been derived from the podcast transcript and has been edited for readability and clarity

About our guests

Diana Nole joined Nuance in June 2020 as the Executive Vice President and General Manager of Nuance’s Healthcare division, which is focused on improving the overall physician-patient experience through cutting-edge AI technology applications. She is responsible for all business operations, growth and innovation strategy, product development, and partner and customer relationships.

Over the course of her career, Diana has held numerous executive and leadership roles, serving as the CEO of Wolters Kluwer's Healthcare division and president of Carestream's Digital Medical Solutions business. She was instrumental in bringing Wolters Kluwer's healthcare product offerings together into a suite of solutions incorporating advanced technologies to drive further innovation. Under Ms. Nole's leadership, Wolters Kluwer formed a centralized applied data science team that accelerated the successful introduction of next-generation AI-based solutions for data interoperability, clinical surveillance, and competency test preparation for nursing education.

Ms. Nole is a board director and Chair of the audit committee for the privately held life sciences company Clinical Ink, and was recently named the first female Chair of the board of trustees of St. John Fisher College, home to the Wegmans Schools of Pharmacy and Nursing. Diana has dual degrees in Computer Science and Math from the State University of New York at Potsdam and earned her MBA from the University of Rochester’s Simon School.

Yaa Kumah-Crystal, MD, MPH, MS, is an Assistant Professor of Biomedical Informatics and Pediatric Endocrinology at Vanderbilt University Medical Center (VUMC). Dr. Kumah-Crystal’s research focuses on studying communication and documentation in healthcare and developing strategies to improve workflow and patient care delivery. Dr. Kumah-Crystal works in the Innovations Portfolio at Vanderbilt HealthIT on the development of Voice Assistant Technology to enhance the usability of the Electronic Health Record (EHR) through natural language communication. She is the project lead for the Vanderbilt EHR Voice Assistant (VEVA) initiative to incorporate voice user interfaces into the EHR provider workflow.

Within VUMC HealthIT, Dr. Kumah-Crystal serves as a Clinical Director. In this role, she works across clinical systems to perform internal reviews of, and provide advice about, EHR change and integration projects, with the goal of optimizing products and processes. Dr. Kumah-Crystal remains clinically active, supervising Pediatric Endocrine fellows and seeing her own clinic patients. Her research and related publications address the use of technology to improve care and communication for providers and patients.

About the host

Paddy is the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor & Francis, Aug 2020), along with Edward W. Marx. Paddy is also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He is the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare, featuring C-level executives from the healthcare and technology sectors. He is widely published and has a bylined column in CIO Magazine and other respected industry publications.
