Podcast with Mike Alkire, President of Premier Inc. and Dr. Jonathan Slotkin, Vice Chair of Neurosurgery and Associate Chief Medical Informatics Officer of Geisinger
In this episode, Mike Alkire, President of Premier Inc., and Dr. Jonathan Slotkin, Vice Chair of Neurosurgery and Associate Chief Medical Informatics Officer of Geisinger, discuss how technology and data are helping public health officials balance reopening the economy against managing the spread of the COVID-19 virus.
Premier recently launched a syndromic surveillance tool for COVID-19, which it is piloting at Geisinger to improve the quality of medical interventions and prevent the spread of the virus. Mike believes that managing the virus requires a syndromic surveillance system, contact tracing, and testing with higher accuracy rates.
According to Jonathan, siloed information and disparity in EHRs across health systems limit the scope of innovation, and in the case of COVID-19 this is affecting patients directly. He further states that, as part of a public-private partnership, Geisinger is performing contact tracing and has followed up on 1,600 COVID-19 positive patients, benefiting patients, providers, and communities.
PP: Hello again everyone, and welcome back to my podcast. This is Paddy and it is my great privilege and honor to introduce my special guests today, Mike Alkire, President of Premier and Dr. Jonathan Slotkin, Associate Chief Medical Informatics Officer and Vice Chair of Neurosurgery at Geisinger. Dr. Slotkin also has a dual role with Contigo Health as the Chief Medical Officer. Gentlemen, welcome to the show. Tell us a little bit about the COVID-19 surveillance tool that Premier has just launched and that you’ve started piloting at Geisinger.
MA: Paddy, over the last year or so, we have been building out technology to help with the PAMA guidelines, which are guidelines that CMS is implementing to get after high-cost imaging. The focus has been on building up these pipes to Epic and Cerner and these electronic medical records to ensure that patients were appropriately utilizing these high-technology images. So when COVID hit, we pivoted the technology. And because we already had the pipes built into all the EMRs, we found that if you looked at patients’ symptoms, there were a number of characteristics indicating a high probability that these patients were COVID patients. We thought that was incredibly meaningful because we could do it in real time. So, at the point when the physician is meeting with that patient, we can identify somebody who has those critical symptoms. Given that data, we can dive down to the zip code level. We can get that data to organizations that are interested in understanding where surges are occurring or where there is a high prevalence of the disease. There’s also obviously a lot of interest on the part of the federal government and the states to understand where surges are happening. The whole idea is to provide this real-time data mechanism to inform these public health officials around “do I open the economy,” “do I keep it shut,” or “if I open to some degree but see a surge, am I putting the appropriate resources in those communities?” We think it’s very, very critical, and it’s part of a three-legged stool. To manage the virus, you need this syndromic surveillance. You obviously need contact tracing. And we need to do a better job of rolling out testing with higher accuracy rates.
JS: Paddy, the problem we all wanted to solve is that existing syndromic surveillance in 2020 is dramatically lacking. I think it will surprise many of your listeners when they hear what those systems actually consist of. Existing state and federal syndromic surveillance consists largely of reactive, non-real-time reporting of disease diagnoses that are picked up mostly by emergency departments. These tools run on 20-year-old technology and are not automated. In some areas, clinicians actually need to print data from EHRs, manually fill in reporting forms, and fax them to public health officials. Some of these forms take up to 30 minutes to fill out. And in some instances, the lag between a patient receiving a positive test result and the reporting of that data can be as long as seven days. Paddy, you’ve spent a lot of your career on this problem. We have troves of important data, like positive COVID results, signs, and symptoms, but it sits in siloed EHRs across different hospital systems and care settings across the country. So the nation desperately needs an automated, real-time, effective national surveillance system, and that was the major impetus for this work. The team set out to build exactly that, and the goals were to build an application that can be used by health systems, states, and the federal government, just like Mike said, to perform several really important tasks: to know when and where COVID is surging before the numbers tell us, to better determine which patients are more likely to become profoundly ill, and to provide healthcare systems with risk- and severity-adjusted information to predict findings. The tool uses natural language processing and machine learning to scan free-text notes and orders for hundreds of phrases, like trouble breathing or loss of taste, and other free-text and discrete data for signs, symptoms, and other indicators of infection.
By using this approach, the system is able to rapidly identify patients who are presenting with signs and symptoms of COVID-19.
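To make the phrase-scanning idea concrete, here is a minimal, purely illustrative sketch of symptom-phrase matching over a free-text note. The phrase list, threshold, and function names are hypothetical; the actual tool scans hundreds of phrases and combines NLP with machine learning over discrete EHR data.

```python
import re

# Illustrative symptom phrases only; the production tool scans hundreds
# of these across free-text notes, orders, and discrete EHR fields.
SYMPTOM_PHRASES = [
    "trouble breathing",
    "shortness of breath",
    "loss of taste",
    "loss of smell",
    "dry cough",
    "fever",
]

def flag_possible_covid(note_text, min_matches=2):
    """Return the symptom phrases found in a clinical note and whether
    the note crosses a simple match threshold (hypothetical logic)."""
    text = note_text.lower()
    hits = [p for p in SYMPTOM_PHRASES
            if re.search(r"\b" + re.escape(p) + r"\b", text)]
    return hits, len(hits) >= min_matches

note = "Pt reports fever x3 days, new loss of taste, and trouble breathing on exertion."
hits, flagged = flag_possible_covid(note)
print(hits)     # ['trouble breathing', 'loss of taste', 'fever']
print(flagged)  # True
```

A real system would also handle negation ("denies fever"), misspellings, and synonyms, which is where the machine learning layer described above comes in.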
PP: This is very interesting, and of course very timely, given everything that we are going through today. The tool is essentially an NLP algorithm that mines clinical notes and other unstructured text data sitting inside electronic health record systems. And this is the route that many COVID-19 apps are taking in the context of dealing with the pandemic and building early warning surveillance systems. Jon, can you talk a little bit about how you use this information as a decision support tool, not just to flag patients at risk of infection, but in terms of closing the loop? What do you do with that information? What happens next? How do you adjust your care management or treatment, and how do you integrate it with your reporting requirements?
JS: We and other health systems are very eager to start using this application. In addition to Geisinger, Atrium, Community Health Network, Advent, and I think over 30 other systems are coming online with the application shortly. There are some really valuable ways health systems can use the information from this application, above and beyond the important work of syndromic surveillance. Systems can identify flare-ups based on their zip codes. It will often be one to four days, or even more, before lab test results come back, and in patients who don’t get tested or wouldn’t have been tested, usually a week or more before hospitalization based on symptom progression. With this kind of foresight, systems can do things like plan decreases in elective procedures well in advance, rather than just reacting to public numbers, and forecast the equipment an ICU needs based on the incidence and even the severity of disease that the tool picks up in the outpatient setting. The tool can also identify patients in the ambulatory setting who are at high risk for admission, or who may be more appropriate for a home care environment with home pulse oximetry or other programs. It is important to call out two really powerful features coming to the app in the next several weeks. One is that the system will present a pre-test probability based on symptoms to help providers interpret negative diagnostic test results, which we know can be inaccurate, sometimes significantly so, helping distinguish true negatives from false negatives. This is where you get to action at the point of care, which Premier always thinks about. The team has also embedded the NIH COVID treatment guidelines right into the CDS tool. It’s important to point out that the Stanson tool has over 300 hospital system customers, so this is live, or can be live, for two to three hundred thousand providers.
In this way, with treatment guidelines at the point of care, you can support providers with real-time interventions and translate evidence into practice, which I think is a core mission for Premier.
PP: One of the things that I read about when I saw the news release on the tool is that it works across different EHR systems. And we all know that interoperability has been a challenge for a long time. It’s getting better, and we have the CMS final rule that takes effect in 2021, so we are going to see more seamless data flow, but it is still a significant challenge. Can you talk about how you look across Epic and Cerner, as an example, or other systems out there? How is this different from other COVID-19 tools that are out there?
JS: Paddy, siloed information and disparity in EHRs across different health systems not only limit innovation, but in a situation like COVID-19, they affect patients immediately, right now. Thankfully, in the last few years we have all seen significant progress in these areas. This tool, ADAM, which stands for Advanced Detection Analysis and Management, works well with Epic and Cerner, and I think it’s going to be live over the next couple of weeks or months in MEDITECH. As Mike mentioned, the rapidity of getting these solutions live across multiple EHR vendors comes from the fact that the backbone of this solution is Stanson’s PAMA tool, which is live at 300 hospitals. From a growing machine learning standpoint, you get the combined experience and data of all of these hospital systems across three, and soon to be four, EHR vendors, which allows powerful improvement of the system’s machine learning algorithms, not just from one system, but from all of them. This data is never going to be sold to pharma companies and device companies, but there is power in the aggregation of this data. Mike can elaborate on the advanced discussions with several states and parts of the federal government. But it is important to be clear here: we know at Geisinger that the data Stanson and Premier have will never be shared with any outside parties, like a state or federal agency, without the provider system’s written permission, which I think many provider systems, given the mission we’re trying to accomplish here, would be open to.
MA: The only thing I’d add here is that Premier has taken a pretty significant focus from an advocacy standpoint for interoperability. For all the reasons that Jonathan said, we obviously want the ability to track a patient throughout the progression of the disease, no matter where they’re actually getting care. We spent a lot of time working with various datasets to integrate them, and working with these EMR vendors and other vendors to ensure that they have open data sources. To Jonathan’s point, I do want to tie this all together from a COVID standpoint. The reason it’s so meaningful for the states and the feds to step up here and really look at that three-legged stool of controlling the virus is that there is such a high false negative rate in testing, depending on when you test versus when you actually acquire the disease. There were a couple of articles a few weeks ago, one from the Annals of Internal Medicine, the other from the New England Journal of Medicine, that talked about significantly high false negatives. That’s really an issue if you think about sending somebody on their way saying you don’t have the disease when in fact you do. Those articles presented the fact that the longer after acquiring the disease you are tested, the lower your false negative rate. So you’re often waiting two or three days to get decent results. And what we’re saying is we have the ability to do that in real time by looking at the symptoms.
PP: I want to dig a little bit deeper into this Stanson tool that you mentioned and how that creates synergies for not just the business, but at the level of the tool itself.
MA: The whole thesis for Stanson, for our investment from a capital standpoint, really was this: we’re a performance improvement company. We’re all about helping healthcare systems drive improvements from a cost reduction standpoint and a quality and safety improvement standpoint. What we had been doing over the years is taking our best and largest datasets in the clinical, safety, and operations settings, which is labor and supply chain, integrating those datasets, and creating insights into performance improvement for healthcare delivery systems. And that was great, because those insights drove a ton of value. But what Stanson allows us to do is actually implement those improvements. Stanson writes into the Epic, Cerner, and Athena EMRs the appropriate protocols that should be followed to maximize high quality, great safety, and low cost. That was the whole initial thesis: we wanted to hardwire those improvements into the workflow at the EMR, at the point of care.
PP: It’s all about having the decision support tool at the point of care and being able to act on it. That is the holy grail, or the mantra, for any kind of decision support tool. You pointedly mentioned that you are very careful about data privacy. I read a study recently, I think it was done by the University of Illinois at Urbana-Champaign, that looked at some 50 different COVID-19 apps, and the researchers were very concerned about the lack of clarity on what really happens to the data. How are you explicitly providing assurances to your patient community that data privacy is going to be maintained? How do you ensure that and execute on it when there are so many different people getting access to the data?
MA: Premier is an organization that has been in clinical data analytics, labor data analytics, and patient health information for years; we have been at it for probably more than twenty-five years. We’ve got a very rigorous and consistent process to ensure that data rights are appropriately followed. And as for our ability to de-identify data, we’ve been doing it for years. So if there is an institution out there that has the ability to do this, has been doing it, and has the processes and technologies to do it, it’s us.
JS: Paddy, I think for all of us it’s a fascinating time to think about balancing public health needs and privacy in our own minds, and even what each of us is willing to tolerate in our own personal lives during a worldwide pandemic. As Mike said, the Premier team feels that if it doesn’t have the trust of its partners and their patients, we don’t have anything; and Geisinger certainly feels that way. A lot of the apps that you mentioned are consumer-facing apps. It’s important to call out, for anybody who just dips into the surface of this, that this is not a patient- or consumer-facing application. This is a robust clinical decision support tool that has been live for years, has been repurposed, and sits within health systems’ EHRs. That means it sits under the extensive BAAs and other agreements that all of Stanson’s existing work is covered by. It’s the type of software and activity covered by HIPAA, with privacy literally protected by law. It’s important to point out that existing syndromic surveillance in our states and country, as I mentioned, involves printing documents, often filling some aspects out by hand, manually keying certain forms, and sometimes even faxing results. That is a system which is not only not modern but also insecure from a privacy standpoint. We think that this kind of automated, fully digitized, secure solution to disease surveillance leads with privacy and is a significant improvement over the existing model.
PP: What triggers the tool itself, since this is more like a surveillance tool? What is the event that triggers it?
JS: For the informatics wonks, there are three. It started with one, and then Stanson, with help from Geisinger and others, worked with Epic and other EHR vendors on the rapid expansion. I should call out Epic and Cerner here; Geisinger is an Epic shop, so Epic is the one I can speak to, and they have been a tremendous partner, understanding that during a national emergency we need to move smartly and quickly. So, three triggers fire the tool’s ability to take a look and give actionable insights. One is the ordering of an imaging test, which of course is critical in COVID and is the backbone of what Stanson’s functionality always was. The second is the order of a COVID test, another great place to fire functionality that runs natural language processing on free text and also analyzes discrete data at that time. And the third is when the COVID test is resulted and the chart is opened to review the result. That’s a moment when there’s a dip-in and a look-in, and Epic has helped with this and done extensive analysis on the overhang time associated with it. These times are significantly less than half a second, in the hundreds-of-milliseconds range.
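The three trigger points can be pictured as a simple event dispatch. This is a hypothetical sketch only; the event names, chart structure, and scoring logic are illustrative, not the actual EHR integration or Stanson API.

```python
# Hypothetical event names for the three CDS trigger points described above.
TRIGGER_EVENTS = {"imaging_order", "covid_test_order", "covid_test_resulted"}

def on_ehr_event(event_type, chart):
    """Fire symptom analysis only on the three trigger events, so the
    dip-in overhead stays small for every other chart interaction.
    The keyword count stands in for the real NLP analysis."""
    if event_type not in TRIGGER_EVENTS:
        return None
    keywords = ("fever", "cough", "loss of taste")
    notes = chart.get("notes", "").lower()
    score = sum(kw in notes for kw in keywords)
    return {"event": event_type, "symptom_score": score}

result = on_ehr_event("covid_test_order", {"notes": "Fever and dry cough for two days."})
print(result)  # {'event': 'covid_test_order', 'symptom_score': 2}
print(on_ehr_event("med_refill", {}))  # None (not a trigger event)
```

Gating the analysis on a handful of events, rather than running it on every chart open, is what keeps the overhang time Dr. Slotkin mentions in the sub-second range.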
PP: You mentioned false negatives a couple of times. Have you had a problem with false positives?
JS: Not really; false negatives are the big enemy right now. In terms of what we’ve seen and how you validate a tool like this: in early testing, the team looked at symptoms using the methods we’ve talked about and compared them to a later positive PCR viral test, and, to answer your false positive question, the rate is probably about four percent. That’s really good, but the team is making it better. One really important way to make it better, and also to validate it, is something that’s ongoing with our health system now: retrospective cohort evaluation. We, and everybody, have months of medical records on patients who later went on to test positive or negative, and who did well clinically or, unfortunately in some cases, did not. What we are doing is looking back at a cohort of patients who went on to test positive, where we know how they did clinically, and also at a group that went on to test negative. Retrospective cohorts like this have a long history in the machine learning and AI area. We can not only validate the tool this way but also do data-driven research to tune and improve the algorithms, significantly increasing the sensitivity and specificity of the tool with a known dataset.
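The retrospective validation described here boils down to comparing the tool's flags against later PCR results. Here is a minimal sketch of computing sensitivity, specificity, and the false positive rate from such a cohort; the function and the toy data are illustrative, not the actual validation study.

```python
def confusion_metrics(predictions, pcr_results):
    """Compare tool flags (True = flagged symptomatic) against later
    PCR results (True = positive) and derive the standard metrics."""
    pairs = list(zip(predictions, pcr_results))
    tp = sum(1 for p, r in pairs if p and r)          # flagged, later positive
    tn = sum(1 for p, r in pairs if not p and not r)  # not flagged, negative
    fp = sum(1 for p, r in pairs if p and not r)      # flagged, but negative
    fn = sum(1 for p, r in pairs if not p and r)      # missed a positive
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),
    }

# Toy cohort of eight patients: tool flag vs. later PCR positivity.
flags = [True, True, False, True, False, False, False, True]
pcr   = [True, True, False, False, False, False, True, True]
metrics = confusion_metrics(flags, pcr)
print(metrics)  # {'sensitivity': 0.75, 'specificity': 0.75, 'false_positive_rate': 0.25}
```

Because the PCR outcomes are known in a retrospective cohort, the same computation can be rerun after each tuning pass on the algorithm to confirm that sensitivity and specificity are actually improving.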
PP: A related question on that, obviously, is evidence, and you were kind of going there at times. Are you building the evidence for this tool as you go along?
JS: Well, some of those initial looks that I mentioned have already occurred and led to the data I cited. The other studies I mentioned, like the retrospective validation and the tuning, are happening as we speak, from both a quality improvement and a research perspective, because I do think this is quality improvement work. And as far as tuning the machine learning algorithms is concerned, that’s an ongoing, continuous, iterative process.
PP: One of the things that has really impressed me is the level of public-private collaboration that COVID-19 has brought about. I have seen many examples at the state and city level. One of my guests on this podcast talked about what they’re doing in the city of Austin, for instance. And I see many great examples of how the public and private sectors are coming together to really address this. Can you talk a little bit about how this tool is being used for public health in general? At Geisinger, you’re in Pennsylvania; can you talk about how this is contributing to public health efforts, especially contact tracing, which is such a big topic right now?
JS: There’s a ton of important opportunity in this area. We know that contact tracing usually falls under local and state health departments, but they’re all spread thin. I think we all saw the study that Ars Technica wrote up estimating that we would actually need three hundred thousand contact tracers to do this job right. Geisinger quickly realized that it’s already expert in managing testing, communicating results, and treating those who test positive. So Geisinger is performing contact tracing as a public-private partnership and now has twenty-four employees spending significant parts of their workweek on contact tracing. As of a few weeks ago, the team had made over twenty-seven hundred phone calls to follow up on sixteen hundred positive patients. This directly benefits patients, providers, and communities. As for how you take the Stanson tool and actively connect it to states, I’m sure Mike can elaborate.
MA: I think at the end of the day, the health officials we’re having conversations with want these public health decisions to be informed by data and science. The idea is that you have what we suggested: that three-legged stool of more advanced, more refined, better testing, plus contact tracing, which we think will always be debatable. Jonathan made a great comment early on about the debate over positive societal impact versus civil liberties being tightened. We do know there are a number of countries using iPhones and similar technologies to track where people who have the virus have been and to alert others that they may have been exposed. That’s a very meaningful discussion and debate that we need to have in the U.S. And then finally there’s the syndromic surveillance we have been talking about. The reason it’s so critical is that if you’re the governor of a state, well, early on, governors of huge states decided to shut the entire state down when maybe there was only a surge in eight or nine percent of the counties, representing 60 or 70 percent of the population, while the other counties were only minimally impacted. All we’re saying is that there is technology and data at the zip code level that can provide a great deal of information about how to balance public health against opening the economy; that’s number one. Number two, we have heard a lot of conversation about how this is disproportionately affecting people of color in urban settings. Our technology has the ability to identify those issues, and public health officials can then think through the best way to provide capabilities and services to those parts of the population.
So, we think there are a couple of incredibly important use cases here that public health officials should leverage.
PP: Well, Jon and Mike, it’s been such a pleasure speaking with you. Thank you so much for sharing your thoughts on this. I think this is a very important initiative, and I hope to get you folks back on this podcast a few months down the road, when you have more learnings to share from the tool’s work in the field. Again, all the very best.
We hope you enjoyed this podcast. Subscribe to our podcast series at www.thebigunlock.com and write to us at email@example.com
Disclaimer: This Q&A has been derived from the podcast transcript and has been edited for readability and clarity.
About our guest
Mike J. Alkire is the President of Premier. As President, Alkire leads the continued integration of Premier’s clinical, financial, supply chain and operational performance improvement offerings helping member hospitals and health systems provide higher quality care at a better cost. He oversees Premier’s quality, safety, labor and supply chain technology apps and data-driven collaboratives allowing alliance members to make decisions based on a combination of healthcare information. These performance improvement offerings access Premier’s comparative database, one of the nation’s largest outcomes databases. Alkire also led Premier’s efforts to address public health and safety issues from the nationwide drug shortage problem, testifying before the U.S. House of Representatives regarding Premier research on shortages and gray market price gouging. This work contributed to the president and Congress taking action to investigate and correct the problem, resulting in two pieces of bipartisan legislation.
Jonathan R. Slotkin is the Vice Chair of Neurosurgery and Associate Chief Medical Informatics Officer at Geisinger. Dr. Slotkin is board certified in neurosurgery by the American Board of Neurological Surgery. His clinical interests include care for back and neck pain, as well as sports-related spine injuries, and he has particular interests in consumerism and the digital transformation of healthcare. His research interests include post spinal cord injury regeneration. Dr. Slotkin has expertise in spine outcomes, caring for degenerative and congenital spine conditions, spinal tumors and spine/spinal cord injury. He earned his medical degree from the University of Maryland, and completed his residency at Harvard University, Brigham and Women's Hospital. He completed his fellowship in spine surgery at New England Baptist Hospital. Dr. Slotkin is director of Spinal Surgery for Geisinger and also serves as associate chief medical informatics officer.
About the host
Paddy is the co-author of Healthcare Digital Transformation – How Consumerism, Technology and Pandemic are Accelerating the Future (Taylor & Francis, Aug 2020), along with Edward W. Marx. Paddy is also the author of the best-selling book The Big Unlock – Harnessing Data and Growing Digital Health Businesses in a Value-based Care Era (Archway Publishing, 2017). He is the host of the highly subscribed The Big Unlock podcast on digital transformation in healthcare featuring C-level executives from the healthcare and technology sectors. He is widely published and has a by-lined column in CIO Magazine and other respected industry publications.