Podcast Episode 5: Interview with Dr. John Halamka Transcript

Tom Andriola  00:09
Hello and welcome to Gradually, Gradually, Then Suddenly, a blog and interview series with leaders to discuss current issues and how data and technology are shaping our world. My name is Tom Andriola, and I’m the Vice Chancellor for Information Technology and Data at the University of California, Irvine. My guest today is Dr. John Halamka. Dr. Halamka is an emergency medicine physician, medical informatics expert, and president of the Mayo Clinic Platform, which focuses on transforming healthcare by leveraging artificial intelligence, connected healthcare devices, and a network of partners. Dr. Halamka has been developing and implementing healthcare information strategy and policy for more than 25 years. Previously, he was executive director of the Health Technology Exploration Center at Beth Israel Lahey Health; Chief Information Officer at Beth Israel Deaconess Medical Center; and International Healthcare Innovation Professor at Harvard Medical School. He is a member of the National Academy of Medicine. Dr. Halamka, thank you for joining us today.

John Halamka  01:09
Well, absolutely thrilled to be here, because there’s no better way to spend this day than with my friend, Tom.

Tom Andriola  01:16
Fantastic. Well, John, it’s always wonderful to have you and to be able to share your insights, you know, with the leaders of our community. For the years that I’ve known you, I’ve always admired you and followed you, because you’re a thought leader on the evolution of technology, and now data, and how it’s impacting healthcare. You know, you have such a unique way of taking the different expertise that you have, looking through different lenses, and pointing out both the challenges and the opportunities that we have. What we want to bring out today is a few of the topics that you’re known to talk about, and really maybe go a little bit deeper than we normally do. So we’re going to start with one that I know is near and dear to your heart, the topic of interoperability. You’ve been vocal for many years about the barriers to healthcare interoperability, contrasting that with what was possible, or what you made possible, from the time when you were in Massachusetts. So I’d like to ask you today: where would you say we are now? And what opportunity has the pandemic given us to accelerate interoperability to where we ultimately want and need to be?

John Halamka  02:22
So, what a fabulous question. So maybe the first thing to ask is – what is interoperability? And why do I even pose that to you? So I was chatting with one of our great senators, in fact one of the senators who was a chair of the Senate Health Committee, and I said, ‘Well, what is interoperability?’ And that senator said to me, ‘Every data element in your entire lifetime experience, exchanged with every stakeholder, in real time, for free.’ And if that is your definition of interoperability, I think, Tom, we should probably just pack up and go home, right? Because one challenge with interoperability is people start with a presumption of data Nirvana. And that is, ‘What do you mean, you don’t have my aortic valve surface area in an ontology-controlled vocabulary, exchanged, you know, with everyone everywhere?’ Well, we will. But why don’t we start with problems and meds, allergies and labs, rads and care plans? And oh, then we’ll move on to domains like cancer care, or from there, you know, certain aspects of telemetry that we’re gathering from our phones, or these new devices in our homes. It’s a journey. So when people say, ‘Oh, my God, you know, we haven’t achieved anything in 30 years in interoperability,’ well, ask yourself: can a user of Epic or Cerner or pick your EHR – I mean, I’m not being specific here – without huge expense or huge effort, exchange at least a core data set of information about you? And the answer is, a lot of the time they can, right? And that was certainly not the case a decade ago.

But ask, as you said, about COVID. So here’s a challenge. Do anti-malarial drugs work? Well, I don’t know. I hear, you know, I read that they’re the greatest thing ever. Well, how are you going to measure that? Oh, you probably need to look at a cohort of patients who took them and look to see if they have some outcome, like reduced hospital stay, or maybe ventilator days were lower, or something. Well, okay, how are you going to do that, given the state of interoperability? Because although we have USCDI, a common data set for exchange, we don’t exactly have a national clinical trials network or registries for every disease. So what do we do? The COVID-19 Healthcare Coalition said, ‘Well, okay, if you ask that question, you need to pick a cohort, those who had taken anti-malarial drugs. Okay, great. And well, what outcomes do you want to look at? Hospital length of stay, or ventilator days, or whatever?’ So this is what we did. We went to Epic, and we went to Cerner, and we went to the EHR vendors and said, ‘Oh, do you have common ontology-controlled vocabularies for ventilator days?’ And they looked at us and said, ‘Well, no.’ ‘Well, how about this? Could you write a script so that you could actually export ventilator days as a structured data element into a file?’ And they said, ‘Oh, yeah. Actually, we could.’ So the meaningful use era, moving everybody to electronic recording, gave us the potential of being able to say we’re going to create an ‘are anti-malarial drugs effective?’ data set, using a script, and CMOs or CIOs at each institution can run the script. And in a matter of a day, you can get 5,000 sites producing a dataset that shows who took anti-malarial drugs and their outcomes. And then we can get that securely transmitted into a registry. So we actually did that. And from beginning to end, it took about two weeks. And of course, the question was answered: don’t take anti-malarial drugs. They’re bad.
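
To make that workflow concrete, here is a minimal sketch in Python of the kind of site-level extract this approach implies: each institution runs a small script against its own records and produces a structured file of cohort membership and outcomes that can then be transmitted securely to a central registry. The field names, values, and CSV format are assumptions for illustration only, not the coalition’s actual script or schema.

```python
import csv

# Hypothetical site-level export: one row per (de-identified) patient,
# with cohort membership (did the patient receive an anti-malarial drug?)
# and the outcomes of interest (length of stay, ventilator days).
# Field names and values are illustrative placeholders.
ENCOUNTERS = [
    {"patient_id": "A001", "antimalarial": True,  "los_days": 9,  "vent_days": 4},
    {"patient_id": "A002", "antimalarial": False, "los_days": 6,  "vent_days": 0},
    {"patient_id": "A003", "antimalarial": True,  "los_days": 12, "vent_days": 7},
]

def export_registry_file(encounters, path="site_extract.csv"):
    """Write a structured extract that each site could run locally and
    then send securely to a central registry (illustrative only)."""
    fields = ["patient_id", "antimalarial", "los_days", "vent_days"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(encounters)

if __name__ == "__main__":
    export_registry_file(ENCOUNTERS)
```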

Tom Andriola  06:33
So, John, let me ask you something on that, right? So here it is: you know, what the pandemic gave us was the challenge question that we needed to find a way to respond to quickly, to answer the question of the day, because time was critical. How do we parlay that into a sustainable improvement in interoperability? Right? So how do we freeze the new normal in place going forward?

John Halamka  06:57
So here’s what people recognized. Did we get a result? We did. Was meaningful use in recording the data a foundation? It was. For the next pandemic, should we be writing scripts and SFTPing data sets? No! You know, what the recognition was, is that we need APIs that are not only patient-level APIs, like the interoperability or information blocking rules stipulate, but situational-awareness APIs: how many ventilators do you have at UCI that are unused right now? Oh, what’s your supply of convalescent plasma right now? You know, aggregating patients, as we talked about, by certain outcomes based on certain drugs or whatever. So there’s this recognition that we’ve got to do this and be ready for the next mass casualty event by automating the process and making it frictionless. That has already changed people’s thinking, I can tell you.
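
As a rough illustration of the difference between a patient-level API and a situational-awareness API, here is a hedged sketch of the kind of aggregate, non-patient-level payload a site might expose. The function name, fields, and counts are hypothetical; a real implementation would likely build on standards such as FHIR and draw from the hospital’s actual inventory systems.

```python
from datetime import datetime, timezone

# Hypothetical in-memory inventory for one site; in practice this would
# come from asset-management and blood-bank systems, not a constant.
SITE_INVENTORY = {
    "ventilators_total": 120,
    "ventilators_in_use": 87,
    "convalescent_plasma_units": 42,
}

def situational_awareness_snapshot(site_id="UCI"):
    """Return an aggregate snapshot suitable for a public-health or
    mass-casualty coordination API (illustrative sketch only)."""
    return {
        "site": site_id,
        "as_of": datetime.now(timezone.utc).isoformat(),
        "ventilators_unused": SITE_INVENTORY["ventilators_total"]
                              - SITE_INVENTORY["ventilators_in_use"],
        "convalescent_plasma_units": SITE_INVENTORY["convalescent_plasma_units"],
    }

if __name__ == "__main__":
    print(situational_awareness_snapshot())
```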

Tom Andriola  08:03
Let’s talk about everyone’s favorite hype term of the day, AI. Right, all you’ve got to do is Google AI. Look at the results, clear your calendar, because you’re gonna be reading for the next four or five hours. But I don’t want to talk about AI from the standpoint of if; I want to talk about how, right? Like most technology, I mean, the reason we call this series Gradually, Gradually, Then Suddenly is to recognize that technology usually emerges, and then the next thing we know, it becomes pervasively a part of our society or our organizations, however you want to look at it. So if we work with the assumption that it’s just a matter of time, how do you see the implementation of AI impacting healthcare today? And then what are you hopeful for? What is the real promise for patients and medical professionals?

John Halamka  09:01
So this is a very complicated question. And so let me start off with just grounding your audience in the reality of AI. I was reading a press release two or three days ago from a very large company in the United States that said, we have got a new AI system that can diagnose dementia from your voice. And you know, it has a sensitivity and specificity of 20%. Now, of course, anyone reading that wouldn’t quite be able to unpack that statement. But what it really means is that flipping a coin is a whole lot better, right? And so do we have a Consumer Reports that said, we took an AI algorithm and ran it against a data set that was different from the data set from which it was derived, and looked at its fitness for purpose, and asked what’s the AUC, right? I mean, let’s look at sensitivity and specificity, and understand, oh, if it has an AUC of, say, point nine, it’s pretty likely to give you few false positives and few false negatives if used for this particular purpose in this particular population. Well, that’s cool. So I can tell you Mayo Clinic is developing a warehouse of these AI algorithms. And they’re going to be transformative by being able to deliver specialty care at a distance to those who might not be able to access it today, in the areas of cardiology, radiology, radiation oncology, neurology, and psychiatry. But we still all have to work on this: if I took a 10-million-patient cohort from those who visit Mayo Clinic, is it useful to patients at UCI? I’m not sure. And we’re going to need to figure that out.
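
For readers who want to see what “run it against a data set different from the one it was derived from” looks like in practice, here is a minimal external-validation sketch using scikit-learn. The labels, scores, and threshold are synthetic placeholders; the point is simply that sensitivity, specificity, and AUC are computed on held-out data from a different population, not on the training set.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic external-validation data (placeholders, not real patients):
# y_true  = ground-truth labels from a site the model never trained on
# y_score = the model's predicted probabilities for the positive class
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.30, 0.65, 0.75, 0.15]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold chosen for illustration

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
auc = roc_auc_score(y_true, y_score)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```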

Tom Andriola  10:56
Right, right. I mean, do you see that the implementation and utilization will follow the trend of other technologies that have impacted healthcare? Or do you see this curve looking very different?

John Halamka  11:11
I think we have to be careful in the deployment of AI not to overpromise and underdeliver, right? Because sometimes people say, ‘Oh, AI, that means my doctor is going away, right?’ And so what I can say is, well, no. But what it means is we will develop algorithms that augment your clinician in a way that helps them with decision making, and maybe streamlines the way they get through their day and reduces burden. And therefore it will be, over time, part of every visit. It may be invisible to the visit, but it will result in higher quality, better outcomes, and lower cost eventually. But it’s hard; it will take a while. And we’re going to have some failures along the way.

Tom Andriola  12:02
Right. Right. Do you feel we’re at the point now where we’re solving the easy cases? Or are we not even there yet, from your perspective? Are the easy cases still, you know, some distance away before we can trust them as being part of the high-quality standard of care? I mean, you at Mayo are pushing the boundaries on this with some of the things that you’re doing.

John Halamka  12:23
So how about this: I don’t know if I’d describe all the use cases as easy or hard. But let’s just say this: you pick off use case after use case and discover if a pattern-matching approach, which is what AI is, is actually useful. Well, it turns out that in the world of evaluating ECGs, which is really all about pattern matching anyway, AI is extraordinarily good… or at auto-segmentation of CT or MRI to help me identify the border of an organ, it’s actually really good. Evaluating fever, it’s not so good. So I think we’re just going to march through use case after use case, and hopefully the literature will suggest which are the ones that we should adopt sooner rather than later.

Tom Andriola  13:07
Right. So these AI predictive algorithms need data, right, and there’s a lot of debate right now going on around the ethical use of patient data. How would you describe where we are in this journey? And maybe what questions aren’t we asking ourselves at this stage?

John Halamka  13:28
Well, so let’s break this down into several issues. So they require data. And a lot of the data we gather in healthcare is just bad. And that is, I have never looked at the University of California data, so I cannot say anything. But how often is race and ethnicity gathered in the course of an average registration event? And if you were to develop an AI algorithm that is interested in reducing disparities of care, is that data quality appropriate to answer the question? Again, I know nothing about the University of California, but I actually have over the last few months been starting to look at large data sets. And you know, they’re sparse. You’re looking at under 50% accurate recording of race and ethnicity, so data quality is key. And then ethical use of data. Does the patient want us to use their data for new cures and treatments? Or, what if it generates a product? How do they feel about that? So Mayo is revising its consent. And it can’t offer an infinite number of consumer choices on consent. But you can offer what I’ll call a reasonable number, four or five, for which the patient could say, you know, I really don’t want to participate in research, or I don’t want this to go into a product, and this is my preference. And then respect that preference. So I think we just need to rethink consent a little bit. And recognize that California, where you’re sitting, actually has a much more advanced way of looking at patient consent than most other states. So let’s get that right.
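
As a purely illustrative sketch of “a reasonable number of consent choices,” here is one way a small set of patient preferences could be represented and then respected when assembling a research cohort. The categories, field names, and filter are assumptions for the example, not Mayo’s actual consent model.

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    # A small, patient-facing set of choices (illustrative categories only).
    research_use: bool = False          # may be used for research studies
    product_development: bool = False   # may contribute to commercial products
    deidentified_sharing: bool = False  # may be shared in de-identified form
    recontact_ok: bool = False          # patient may be re-contacted

def eligible_for_research(patients):
    """Keep only records whose owners opted in to research use."""
    return [p for p in patients if p["consent"].research_use]

if __name__ == "__main__":
    patients = [
        {"id": "P1", "consent": ConsentPreferences(research_use=True)},
        {"id": "P2", "consent": ConsentPreferences()},  # opted out of everything
    ]
    print([p["id"] for p in eligible_for_research(patients)])  # -> ['P1']
```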

Tom Andriola  15:17
Well, I would think so. California, I mean, it is out there. I worry a little bit that the pendulum of protections has swung, you know, a little far, such that, you know, it’s going to be hard for us to utilize this data to aggressively, you know, move forward our ability to care for patients using some of these new technologies. That’s why I’m always trying to talk with people about where that pendulum is swinging, and whether we’re at the right place with the pendulum.

John Halamka  15:45
And what I hear – again, I’m a Boston-based guy working in Minnesota – is that the governor of California, about a month ago, actually signed a waiver that says, although the California Consumer Privacy Act has certain kinds of things you need to do to inform consumers, for healthcare, actually, we’re going to follow the HIPAA standard. And so it’s trying to strike this balance of giving consumers control, where consumers need to be informed, but also not impeding quality care, because you need the data in order to deliver care coordination.

Tom Andriola  16:32
John, do you have a view on some of the conversation at the UN Human Rights level? I mean, patient data is just one type of personal data, but there’s this kind of movement to say that personal data should be treated as an intrinsic human right?

John Halamka  16:42
I reflect back on my father, who passed away seven years ago. And my father said, ‘I have multiple sclerosis. I was given drug after drug after drug. I want to donate my collective data to those who will come after me, because I feel like I am a medical altruist.’ And yeah, if you ask some of my other relatives how they feel about donating their data, they would say, ‘Oh, well, that’s my data.’ And so how about this: I think we always need to put the patient first, right, that’s a Mayo principle, and respect their preferences. But we can make it as frictionless as possible, so that those altruists who want to contribute their data can.

Tom Andriola  17:32
My last question for you today is something that we ask all our guests; we call it the 1% model. We have a lot of listeners who are emerging technology leaders trying to create their careers and create their impact. What two or three tidbits would you leave with emerging leaders for how to create the impact that they want in this world?

John Halamka  17:53
What a great question. My 40 years in the industry have been based on two principles. First, I always work at the intersection of two disciplines. And that is, as you know, I love the UC system, because I did the Medical Scientist Training Program, and I was the first person to combine medicine and engineering. At the time, my advisor said, ‘Why would a doctor want to work in engineering? This makes no sense.’ So work at the margins, right between two disciplines – that’s certainly powerful.

The second thing I’ve always done is to try to put coalitions of people together for the benefit of all, never with a hegemony of any one organization. For example, I recently had a call with a leader in the telemedicine world, and we were talking about regulatory change to reduce cost and friction for patients. And I said, ‘Wouldn’t the natural thing be to just put all five CEOs of the telemedicine firms together with a couple of leading clinicians and a couple of policymakers, and then work together as a nonprofit, non-self-interested group?’ He said, ‘I never thought of that.’

So work at the intersection of disciplines, form coalitions for the benefit of all, and I think you’ll have a huge impact.

Tom Andriola  19:16
John, thank you. That was fantastic. Thank you very much for joining us today and sharing with us a little bit of your work and your words of wisdom.

John Halamka  19:25
Well, thanks so much.