AI and the Digital Healthcare Revolution

Shari Langemak is a physician, a journalist and a digital health strategist. Daniel Kraft is a Stanford and Harvard trained physician-scientist, inventor, entrepreneur and innovator.

45:31

Jan 19, 2017
6,519 Views

Artificial intelligence is disrupting healthcare, from the largest institutions to the most intimate doctor-patient interactions. What’s driving digital medicine, and what challenges might get in the way? Hear insights from our panel including Shari Langemak, editorial director of the physician network Medscape in Germany; Singularity University faculty chair Daniel Kraft, a scientist and inventor; and David Bray, Chief Information Officer of the Federal Communications Commission.

Shari Langemak is a physician, a journalist and a digital health strategist. Her talks and lectures mainly concern the opportunities and challenges of ongoing digitalization and medical innovation. Apart from her job and speaking activities, she advises several startups. She works as Editorial Director of the German branch of the multi-language physician network Medscape. She graduated from Ludwig Maximilian University (LMU) of Munich with a degree in medicine, gained practical experience in London and Shanghai, and finished her PhD at the LMU’s department of Psychiatry. She also completed an MBA program at IE Business School in Madrid.

Daniel Kraft is a Stanford and Harvard trained physician-scientist, inventor, entrepreneur and innovator. With over 20 years of experience in clinical practice, biomedical research and healthcare innovation, Kraft has chaired the Medicine Track for Singularity University since its inception. He is also the founder and, since 2011, Executive Director of Exponential Medicine, a conference that explores convergent, rapidly developing technologies and their potential in biomedicine and healthcare.

Transcript

Michael Krigsman: ... And, it is a great product. So, thank you to Livestream.

Today's show, we're talking about AI in healthcare, and we have an amazing group of people joining us. And, let's begin with Shari Langemak, who is with Medscape. Shari?

Shari Langemak: Hi, nice to meet you. I'm really looking forward to the discussion. Just a few words about myself: I work for the German edition of Medscape. I'm based in Berlin; that's why it's a bit darker in here. I'm basically very involved in the digital health scene here in Europe, and I advise a couple of startups and investors here in Berlin. Digital health, and especially AI, is my passion. [Laughter]

Michael Krigsman: Fantastic, Shari! And next, in no particular order - I'm choosing by the order in which they're displayed on my screen - is David Bray, who has been on CXOTalk a number of times. And, is the Chief Information Officer of the FCC. David Bray, welcome back!

David Bray: Thanks for having me, Michael. And, it's always a pleasure to be here. I have to admit, while in my current capacity I don't do healthcare, coming from the Centers for Disease Control in the past, and being involved with the bioterrorism preparedness and response program, I'm very interested in how we can make improvements in how we respond to disruptive health events, both locally as well as globally.

Michael Krigsman: David Bray, thanks a lot. Last, but not least, is Daniel Kraft, who is doing too many things to count. And, Daniel Kraft, why don't you tell us about yourself?

Daniel Kraft: Hey! I'm a physician-scientist by background, trained in [...] medicine, pediatrics, hematology, oncology. My academic role now is chairing medicine at Singularity University, where we look at where fast-moving, or exponential, technologies are heading, and the ability to leverage them for challenges from education and the environment to healthcare and beyond. And I founded a program out of Singularity University in 2011 called "Exponential Medicine," where we look at how we might leverage things from artificial intelligence, to low-cost genomics, to drones, to big data, to reshape and reinvent health and medicine across the spectrum.

Michael Krigsman: Fantastic! So, Shari, you gave a talk not too long ago in which you described some of the key disruptions having to do with data and other things relating to healthcare. So, as an overview, do you want to maybe just share some of those thoughts with us?

Shari Langemak: Absolutely! Absolutely. So, we are progressing so much in healthcare right now. We're collecting lots and lots of data, more and more data, not only from studies that are conducted worldwide, but also from our mobile phones. Everybody's using health apps, and we get a lot of information from that. And with the help of this data, we will finally be able to treat diseases in a much better way. So, we speak about the era of precision medicine. A patient is not just a disease or a symptom anymore, but a person with many different factors we can take into account for his own treatment: that can be his own genome, or a tumor genome, and even his microbiome or his treatment preferences. So, with all this data we are collecting right now, we are finally able to find the right treatment for each and every patient.

But of course, and this is one of the topics we'll probably dive a bit deeper into today: it's very, very hard to take all these different factors into account, because for a physician, keeping up with the speed of information, with the speed of knowledge, is very, very hard. So, we need some sort of algorithm, some sort of machine learning, to help us and support our decisions.

Michael Krigsman: Daniel, what about this notion of algorithms and machine learning?

Daniel Kraft: Well, as Shari mentioned, we're in this exponential age where we want to get to true precision, personalized medicine, and give the right drug, the right therapy, the right prevention and diagnostics. The challenge is that it's really hard to connect all those dots right now, and for any physician, pharma person, anybody who's trying to make sense of all this data, our brains are challenged; our brains haven't had an upgrade in a million or two years, but our wearables, our devices, our ability to compute is changing at an exponential rate.

So the challenge now for a clinician is to integrate someone's digital exhaust, the wearable devices - I'm wearing three: the Withings watch, the Apple Watch, and the Oura ring - to integrate [...], to integrate the latest guidelines and publications. The average physician, at least in the United States, only reads journals 3-4 hours a month, and there's no way, again, to integrate all that information into practice. So we need machine learning, AI, big data, just to synthesize some of that, to bring the right diagnostics and therapy to that patient at that point of care. And that's sort of the promise of [...] AI, machine learning, and big data now, in an era where a lot of it is becoming available; but how to make it actually useful and provide better outcomes at lower cost is still a huge challenge.

Michael Krigsman: And David Bray, what about the policy implications, and the intersection of healthcare, AI, and what do we do about all of this?

David Bray: So, wearing my Chief Information Officer hat, what I love about what both Shari and Daniel were saying is really how we can use AI almost as augmented intelligence for both the physician and the patient, because as they said, we're in an era of exponential data, in terms of new therapies, but also everything that can determine your health outcomes. And so, policy-wise, I think we need to think about how we can provide choices for the patient, so that they actually have informed choices about both what they want to know, and what they want done with their data. Some of us may want to have our data shared more, because maybe it means better health outcomes, but others of us may not want to know everything, because we're not ready to carry that into the future: that maybe nine, maybe fifteen years from now, based on our genomics, we may have this complication in our health. And that means making an informed choice.

Obviously, I'm not a physician, I am a CIO. I do think we need to think about how do we address the data, how do we address the sensors that are collecting it, and then finally, how do we make sure that you have a locus of control as to what's done with your data, and what algorithms are running with your permission and which ones aren't. And I think that's very key.

Michael Krigsman: Shari and Daniel, you are both physicians. And, what about the impact on medicine, and the role of the physician, and how can physicians make use of this data, and where are we in the process of having this data, and then being able to use it in machine learning, and AI, and can you give us practical examples? I'm throwing a lot at you all at once.

Daniel Kraft: Well, I'll start. I think you start with the question about algorithms, which may be a bit different than AI. A lot of healthcare, in terms of what we do as physicians, is: look at a patient; they're complaining of pain when they pee, and so we check their urine. Is it positive for nitrites and bacteria? Well, then maybe it's a urinary tract infection. That doesn't take going to medical school to learn. A lot of basic, common healthcare issues can be triaged, or even partially diagnosed, with pretty straightforward decision-tree charts; and there are now - not an explosion, but some examples - of chatbots, or simple, early AI, that can ask you about your symptoms and tell you whether that belly pain is likely to be appendicitis, or just indigestion.

So I think there's the simple side of algorithms. In many places where there is not a lot of medical care - many parts of the world don't have access to physicians, or it's expensive to reach them - we can do a lot with simple tools, through SMS or smartphones, to provide algorithm-based triage and health education. And then we get farther up the spectrum. When I'm seeing a patient with a urinary tract infection, instead of just giving them the standard antibiotic, maybe we'll have other information about their renal function, their BMI, what dose you might want to give based on their pharmacogenetics, from a 23andMe-type profile, or their full genome, which is coming down to a thousand, or maybe a hundred, dollars this year.

So you can go from simple algorithms, pick up the urinary tract infection with an algorithm, maybe even call in the prescription by a bot, and deliver it by a drone; but then maybe get much more personalized, to pick the most appropriate antibiotic that's safest, that will give you the best outcome based on that individual, the best information from the CDC, all available at the right time and at the right place, and do that in the right functional and low-cost manner. That's just one small example.
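A minimal sketch, in Python, of the kind of decision-tree triage Kraft describes for a suspected urinary tract infection. The symptom fields, thresholds, and dosing adjustment are illustrative assumptions, not clinical guidance or any product's actual logic:

```python
# Minimal sketch of decision-tree triage for a suspected urinary tract infection.
# All symptom names, thresholds, and dose adjustments are illustrative
# assumptions for the sake of the example -- not clinical guidance.

def triage_uti(symptoms: dict) -> str:
    """Walk a simple decision tree over reported symptoms and a urine dipstick."""
    if symptoms.get("fever_c", 0) >= 39 or symptoms.get("flank_pain"):
        return "possible kidney infection -- see a clinician today"
    if symptoms.get("pain_on_urination") and symptoms.get("dipstick_nitrites"):
        return "likely urinary tract infection -- clinician can confirm and prescribe"
    if symptoms.get("pain_on_urination"):
        return "unclear -- collect a urine sample and re-run the check"
    return "no UTI pattern detected -- monitor symptoms"


def adjust_dose(standard_dose_mg: float, renal_function: float) -> float:
    """Hypothetical personalization step: scale the standard dose by renal function
    (1.0 = normal). A real system would draw on pharmacogenomic and guideline data."""
    return round(standard_dose_mg * min(renal_function, 1.0), 1)


if __name__ == "__main__":
    patient = {"pain_on_urination": True, "dipstick_nitrites": True, "fever_c": 37.2}
    print(triage_uti(patient))                   # likely urinary tract infection ...
    print(adjust_dose(500, renal_function=0.6))  # 300.0
```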

Shari Langemak: I might add something about the role of the physician. I think it will change significantly in the next few years already. Every time I'm speaking at a medical conference, the physicians are actually afraid that they will be replaced. But I highly doubt that will be the case, at least in the mid-term. It's rather a tool for the physician, to help him make better decisions, rather than replace him. We know from very recent studies that while AI is in some cases better than the physician at diagnosing, for example, a rare disease, the best outcomes come when physician and AI work together.

So, you can think about it like this: you type in the symptoms, you check the genome and some basic lab parameters, and then you get a recommendation from the AI, and then the physician still needs to check whether that's really the case, and whether to trust this recommendation. I think this quality check is very, very crucial. We won't get rid of it in the next years, for sure.

David Bray: And I actually have a question for both Daniel and Shari. For Daniel, my question is: do you feel like you have a locus of control over the data that's being collected from the different sensors you're wearing? And then for Shari, I guess the question is: as a physician, what would be the best way for algorithms and/or an AI to present new or novel information to you in a way that you could actually absorb it and integrate it into your practice and care?

Daniel Kraft: Well, I'll start. I mean, great question. I think today, clinicians are still overwhelmed by having to spend half their time typing electronic medical record notes - double the time they have for face time with their patients. The flow of data from wearables or 'omics, etc. is not really integrated into the workflow of most clinicians, at least in the United States. And we're just starting to move from an era of very intermittent data and being very reactive - you know, waiting for disease to happen, having broken feedback loops from blood pressure to blood sugar - to being much more continuous with data, and much more proactive as individuals, as physicians, etc. So, right now, there are all of these consumer devices; I was just at the Consumer Electronics Show two weeks ago; there's even more, from tracking mothers and pregnant women, and the baby in utero, all the way to tracking your sunlight exposure and your sleep.

What's just started to happen in the last few years is that the data can flow - in my case, through Apple HealthKit on my iPhone into my electronic medical record at Stanford - where my physician can start to see that and pull it into the EMR. But right now, he may have 2,000 [...]; he's not going to log in to look at everyone's stats, and blood pressure, and other data. We need to have the AI and machine learning sift through that information and present to the doctor the five patients in his practice whom he may need to call and bring in today, based on their blood pressure, their sleep data, maybe their respiratory rate picked up by their mattress.

So today, we have a lot of sensors; this Internet of Things is blending into an internet of medical and health things, but still the docs aren't really connected. There's not a lot of interoperability, and the clinician doesn't want to see more raw data. It needs to be synthesized, so it's actually useful in a timely way, and so that the clinician is rewarded for doing this. Can they bill for doing an e-visit, for looking at the data? As cherished [as it is], I think the role of the physician is key; we're not going to get replaced by AI, but it's going to augment our skills, and enable us, hopefully, to be much more proactive with our patients: from keeping them healthy, to catching disease earlier, and then managing chronic diseases in smarter, evidence-based, feedback-loop ways.
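A minimal sketch of the kind of sifting Kraft describes: score each patient's latest wearable readings against simple cutoffs and surface the few who may need a call today. The field names and thresholds are illustrative assumptions, not validated clinical rules:

```python
# Minimal sketch of "surface the five patients to call today": score each
# patient's latest wearable readings against simple thresholds and rank them.
# Field names and thresholds are illustrative assumptions, not clinical rules.

from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    systolic_bp: float       # mmHg
    resting_hr: float        # beats per minute
    sleep_hours: float       # last night
    respiratory_rate: float  # breaths per minute, e.g. from a smart mattress

def risk_score(r: Reading) -> float:
    """Higher score = more reason for the clinician to reach out."""
    score = 0.0
    if r.systolic_bp >= 160:      score += 2.0
    elif r.systolic_bp >= 140:    score += 1.0
    if r.resting_hr >= 100:       score += 1.0
    if r.sleep_hours < 5:         score += 0.5
    if r.respiratory_rate >= 22:  score += 1.5
    return score

def patients_to_call(readings: list[Reading], top_n: int = 5) -> list[str]:
    """Return the IDs of the highest-scoring patients, most concerning first."""
    flagged = [(risk_score(r), r.patient_id) for r in readings if risk_score(r) > 0]
    flagged.sort(reverse=True)
    return [pid for _, pid in flagged[:top_n]]

if __name__ == "__main__":
    panel = [
        Reading("A", 172, 96, 6.5, 18),
        Reading("B", 128, 70, 7.5, 14),
        Reading("C", 145, 104, 4.0, 23),
    ]
    print(patients_to_call(panel))  # ['C', 'A']
```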

David Bray: Excellent!

Shari Langemak: Yes, and so your question was about how it should be presented to physicians in a clinical context, right? I think we already see some small examples of that. The most basic AI in medicine is software that checks whether there are interactions between medications. We've had that for a long time, and it's some sort of AI and machine learning as well. The other application: physicians are already working with AI in medical imaging, so they get a suggestion of what diagnosis is behind a CT image, or something of that sort. But we will see more and more AI that suggests diagnoses, and I think it's less about how this data and the possible diagnosis are presented, and more about educating physicians about what the limitations of AI may be, and that they should always - well, maybe not always, but in many cases - question the background of a recommendation, because AI is based on studies and data, and it's only as good as the data you feed it. So, if we don't have good study data, or if the angle of research changes, the recommendations can also be flawed.

David Bray: That makes my computer-science heart sing, because of the mantra of "garbage in, garbage out" - we need to keep that in mind when we do algorithms and AI.

Daniel Kraft: On that point, I think what's exciting about this age is that hopefully, in the future of healthcare, the way we practice medicine isn't just going to be evidence-based - meaning we look for a double-blind, placebo-controlled trial of a patient just like the one I have, which usually isn't the average patient - but much more practice-based medicine: mining data from all the Epics and Cerners, and NHS data, to say, for David, who's got this particular condition with this genotype, this looks like it will be the most efficacious therapy, and we can continue to sort of crowdsource that information. There's still a lot of data-blocking: pharma companies, EMRs, hospitals, and academics don't talk to each other. But if we can start to collect information - I mentioned crowdsourcing earlier - like when we drive with Google Maps and Waze, we're used to sharing some private information, our speed and our location, and in exchange we get a map of the traffic, and we can adapt around it.

We can use that same mindset across healthcare, so that the information isn't just garbage, but is synthesized from thousands or millions of people, and we each have our own sort of healthcare map: a GPS to guide me in my healthcare journey, or a patient I may have, and those maps keep getting refined, and when there's a traffic jam, so to speak, we can learn to route around it. That's an opportunity to improve the data sources, because the way we collect data and do clinical trials is set to shift dramatically if we leverage some of these tools with the right regulatory, reimbursement, and other mindsets.

Michael Krigsman: But how do you make this happen? Because even if it can be proven scientifically, medically, to work, and to lead to better outcomes, there are so many entrenched interests - political interests, physician interests, economic interests, insurance interests. How do you ... You're talking about an overhaul of the way we not only practice medicine, but the way we think about it. It seems like an almost impossible task.

Daniel Kraft: Well, there are a lot of misaligned incentives in healthcare, and many healthcare systems; in the US, there are hundreds of systems. Kaiser or Geisinger can operate differently than a fee-for-service place. So part of it is aligning incentives. As we're moving from fee-for-service healthcare, at least in the United States, to more value-based care, we're going to be rewarding technologies and systems that give us better outcomes that we can measure, whether it's keeping someone with heart failure out of the hospital, or doing a smarter, earlier job of diagnosing a cancer, or using AI, like IBM Watson has already done, to help figure out the best therapies for particular lung cancer patients - no oncologist could synthesize all the new molecular markers and different combinations of drug therapy.

So, a lot of this lies in aligning the interests. It's not going to happen everywhere at once. In some systems - in Germany - things can happen that can't happen here. The NHS has great leverage. The VA can do things in a smarter way. So, we need to align those interests. It's not going to happen all at once.

Michael Krigsman: So there's a, say, coincidence of technology on the one hand, and all of the social, economic, and political pressures, constraints, and objectives on the other hand. And so, where's the intersection between the technology of AI and the data, and these other factors?

David Bray: So, Shari, I'd be interested in Germany's perspective, because if I'm correct, and correct me if I'm wrong, Shari, you actually have tougher privacy laws than we do in the United States, is that correct?

Shari Langemak: Yeah. Actually, we struggle quite a bit here in Germany to implement new healthcare solutions. Startups really struggle a lot to bring out new solutions, because there are a lot of data protection rules, and we have many, many laws that inhibit innovation here in Europe, and especially in Germany. I have kind of mixed feelings about that, because of course, especially when it comes to AI and big data, data security and strong privacy laws are very, very important, but you must ensure at the same time that they don't prevent innovation, because it's so important to reduce our costs in healthcare. As we all know, we are barely able to cover the rising costs of healthcare right now, especially drug prices and the rising costs for people with chronic diseases. So we must find a way to allow innovation, and at the same time still protect the individual.

And I think, especially for AI, one way to do that is to have transparency, basically. Companies must show what algorithm they use and on what data the recommendations are based, so that we can still check afterwards whether the recommendations are valid, whether we might need to change the algorithm, and things like that, basically.

David Bray: Alright. And one of the things that I would say from my own experience as a CIO is that you don't want to be top-down when you're dealing with many different players. In fact, it's exactly what Daniel said: you want to think about what incentives will help encourage people to find their own paths in the direction we want to go. And so, if the direction we want to go is holistically treating the patient, making sure it's outcome-based, and actually trying to make sure we're thinking about how we make sense of this data overflow, then the question for us is, "What are the incentives, both in the private sector and the public sector, that will encourage innovators to move in that direction?"

Daniel Kraft: I mean, I think this whole value-based approach is going to drive the incentives. If I'm a physician, and I'm not paid to see more patients and do more procedures, but to keep you healthier and get better outcomes, I am much more likely to use the AI agent to help me pick the right drug and dose, because I'll hopefully get rewarded in some form, whether it's tied to patient outcomes, or a bonus at the end of the year for having patients with good blood pressure control, or for picking out patients before they end up needing hospitalization. In a few years, it may even be malpractice not to use AI in doing diagnosis and therapy. We all know the issues with medical errors; they're the equivalent of a 747 crashing every week or two.

A lot is happening in the hospital setting. We still treat patients based on our own experience, or whatever journal article we just read, and I think as we're incentivized to get better outcomes, and rewarded in smart ways, both financially and otherwise, that's going to drive the adoption of these tools.

And it's going to be disruptive to certain fields. Dermatology, radiology, and pathology are all based on pattern recognition. A lot of what a physician does is learn, "This is what a sick patient looks like. This is a constellation of symptoms." But we may not catch that zebra, or we might miss something, and the more we can leverage this and, again, combine it ... That won't replace the clinician, but using the combination can hopefully give us better outcomes, and enable a primary care doctor in rural Rwanda, using an AI app to do skin exams, to pick up early Ebola, or other things that might have global health implications as we're all getting more super-connected and the world becomes more globalized, including issues that David knows well: bioterrorism.

Michael Krigsman: Are there policy ... let me put it this way: What are the policy implications? You know, there are legal implications - for example, the legal changes that will need to take place to support this - and other types of government policy as well.

Shari Langemak: That has started, and I think we have to answer a couple of questions. How do we ensure the quality control we have talked about? To what extent do we want to use AI, and am I allowed not to want to use AI? As a patient, can I say I don't want AI to be used in my treatment? That's a very tough question, right? Because maybe the outcome isn't as good, and this patient might cost our healthcare system a lot of money. Another important question is who's responsible if something goes wrong. If AI makes a recommendation and it's the wrong one, is the physician responsible in the end, or the company?

So, I think these tough questions - because most of them also have a very critical ethical dimension - can only be solved through some sort of public discussion, or at least a discussion where all the major stakeholders are involved and bring their views to the table.

Daniel Kraft: Essentially, I don't know that we can run medical policy by voting or debates among folks who may not have a good picture of what practicing medicine looks like, or where AI may be in a couple of years. I mean, AI is moving pretty quickly, and in this exponential age we often don't appreciate what's going to be here in a couple of years, and how powerful it might be. I agree: who's liable for this information? Just like with self-driving cars, eventually someone's going to get hit by one, and who gets sued? Is it the self-driving car software? Is it the person who owns the car? There's so much data flowing out - I'm wearing a little patch right now, trying to do a live demo, that's streaming my vital signs to my smartphone, and I could literally be sending David and Shari my 24/7 EKG, which you might see here. ... Hope it looks okay to my [...] out there.

David Bray: You look very relaxed.

Daniel Kraft: Yeah.

Shari Langemak: Yeah.

Daniel Kraft: Who's liable for looking at that data in real time? What algorithm parses that? [...] your rhythm is going on here - not just your EKG, but your sleep data, and beyond. I think we need to be careful about over-legislating this, and allow it some room to expand, but balance that with the malpractice laws at the same time.

Michael Krigsman: By the way ...

David Bray: I was just going to say, as CIO, I would love to do experimental pilots, and so my question for both Shari and Daniel: If you could design an experimental pilot that could be done this year to show people what's going to be possible in this era of patient-centric healthcare and AI, what would be the experiment you would design?

Daniel Kraft: Well, what I think is coming faster than we expect, in this sort of hyperconnected age, is that all these wearables are becoming commoditized. It's how we make sense of and synthesize the data. So, I like to use the example of our modern cars, which have three or four hundred sensors. We don't care about any individual sensor, but the software gives you a "check engine" light based on some of that exhaust. Hopefully, that means you're proactive and you take your car to the mechanic for a blown gasket.

Could we start to see some pilot systems that look at your connected-home data - through Alexa, through your smart mattress, through your wearables - understand your own mix of data, give you a kind of 24/7 surveillance of your particular exhaust and your baseline information, and start nudging you in the right direction to get you on the path of health and wellness, or to manage, let's say, expensive patients like Type 2 diabetics, who end up costing healthcare systems a lot, with a lot more morbidity, and challenge, and suffering as well? That might be a little pilot.

How could we take a systems medicine and a systems biology approach, and connect these dots? We're starting to see some companies do that, like Arivale, founded by Lee Hood, or Longevity, Inc., or Preventure.
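A minimal sketch of the personal "check engine light" idea Kraft describes: learn a baseline from a person's own recent readings and flag when today's value drifts far from it. The metric (resting heart rate), window, and z-score threshold are illustrative assumptions:

```python
# Minimal sketch of a personal "check engine" light: learn a baseline from a
# person's own recent readings and flag when today's value drifts far from it.
# The metric, window size, and threshold are illustrative assumptions.

from statistics import mean, pstdev

def check_engine(history: list[float], today: float, z_threshold: float = 2.5) -> str:
    """Compare today's reading to the person's own baseline via a simple z-score."""
    if len(history) < 7:
        return "insufficient baseline -- keep collecting data"
    baseline, spread = mean(history), pstdev(history)
    if spread == 0:
        return "baseline is flat -- no deviation detectable"
    z = (today - baseline) / spread
    if abs(z) >= z_threshold:
        return f"check engine: today deviates {z:+.1f} SD from your baseline"
    return "all clear: within your normal range"

if __name__ == "__main__":
    resting_hr_last_two_weeks = [62, 64, 61, 63, 65, 62, 60, 63, 64, 62, 61, 63, 62, 64]
    print(check_engine(resting_hr_last_two_weeks, today=78))  # check engine: ...
```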

Michael Krigsman: I want to remind everybody that you are watching CXOTalk, and we're talking about AI and data in healthcare. And right now, there is a tweet chat going on using the hashtag #cxotalk, and you can ask your questions directly of our truly amazing panelists today.

Daniel, I have to ask you, what product is it that you're using that shows your EKG in real-time?

Daniel Kraft: Oh, this is ... I'm wearing a patch from a company called Vital Connect; it's this little, sort of band-aid-sized element that's disposable. These are moving into hospitals now to monitor patients who aren't on monitored beds. I can wear this for about a week, and again, it streams to my smartphone: EKG, temperature, stress level, and other elements. And, you know, as an example, it's an intense stream of data that could come off my body 24/7, but it's a bit of a "So what?" unless that data can flow, let's say, not just into my electronic medical record, but into a smart medical record system, where it can be parsed with machine learning and AI so we can figure out what changes there might be - what I like to call "predictalytics" - predicting that I might be heading in the wrong direction, or nudging me back in a good direction.

So, there are several of these types of smart band-aids coming out, and we can start to measure all sorts of things. The challenge is what we do with the data: how do we blend it and make it part of the workflow for an overwhelmed clinician, who doesn't want yet another data flow they can't manage and are liable for?

Michael Krigsman: So simply having a set of products that generate all kinds of data is not useful unless we have that entire chain built: the right data sources; the right software - machine learning and AI - to parse that data in a meaningful way; clinicians who know how to use that data; and a regulatory and practice environment that accepts the use of this data and has worked through the risks involved, so that it's legally safe for physicians to make use of it.

David Bray: Well, actually, that's what I was going to ask Shari, because I know Germany, again, has tougher laws. As a CIO, what would be the experiment that you would want to design for medical practice and data outcomes, and how would it be different, say, in Germany versus a different [country], because of the laws?

Shari Langemak: I think the biggest challenge here in Germany is to prove that innovation actually has a big impact on outcomes, and to overcome reservations from our physicians here - and not only physicians, to be honest, but many, many Germans. The argument I hear most against innovation, against AI, is that we don't have enough data. So, I would like to see an experiment where physicians and AI together try to diagnose something here in Germany, and compare that with the physician alone, to show what we can already do, how we can already improve the outcome for certain types of patients - to show that innovation actually has a huge impact on healthcare, and a huge possibility to reduce costs for our healthcare system.

Daniel Kraft: Well, that's starting to happen. I've been involved with the X-Prize, and with designing a new X-Prize - the Medical Tricorder X-Prize - which is to build a sort of home diagnostic device for the consumer, blended with AI. One example of a medical tricorder that was part of the X-Prize competition is from Scanadu, which entered a clinical trial; I think those trials are closing now. But, you know, now you can collect data at home that used to require going into a clinic, or an ER, or an intensive care unit. That data can go through your phone, and AI can start to look at, "What's Daniel's normal baseline? How's that changing? If I'm getting a really serious infection, how can this help pick that up?", and communicate that to a medical team in a smart, proactive way, and so that's ...

And so, I think part of this future is how and where we collect this data, and how the consumer or patient is empowered to own their own healthcare data and share it when they feel like being a data donor. A lot of new things are going to come through these smart sensors, and clinical lab tests as well, not just vital signs.

Michael Krigsman: What do we need to do, or what do the stakeholders need to do - the public sector and the private sector - in order to encourage all of this innovation, and create the right type of environment in which it can flourish?

Daniel Kraft: [Laughter] Well, I'll take one example. I think it often comes down to financial incentives, right? If you, as a patient, can pay less for your insurance premium if you agree to go through an AI chatbot before you call the triage line or show up in the ER, that might encourage some adoption. We just saw, launched yesterday in San Francisco, a new company called "Forward," funded by Kleiner Perkins, Google Ventures, and [...], which is trying to build the clinical practice of the future. I think that for a flat monthly fee you can have unlimited access. And when you go there, they have big touchscreens where you can display the data; they apparently are using AI, and it seems like it listens to patients in the clinic and helps provide suggestions to the clinical team. So, I think we're seeing early evidence of this. And you can get that smart, concierge-type practice at a very low price point - under 80 bucks a month, unlimited access. That's going to be disruptive to regular payers, regular hospital systems, and physicians, and it will drive a lot of this adoption, especially when you see you're getting better, smarter care based on your own data, your own [...], your own behavioral type, and a user interface that matches you.

Another AI element is these smart coaches. You can diagnose a patient and prescribe them a therapy or other intervention, but half the people don't follow through on their medical intervention. Now we're seeing AI chatbots and coaches that can track, incentivize, and nudge patients, hopefully in a more personalized way. That blend of AI, machine learning, and user interface will help drive smarter and better outcomes as well.

Shari Langemak: I can only say that here in Germany, we have just recently started to [explore] this potential in e-health and digital health. We just introduced an e-health law last year, and it shows what we basically need. We need incentives - financial incentives and financial penalties - to move a very old sector into the future and help it adapt to these changes, because we have tried for a very, very long time. Anybody listening from Germany probably knows how long it took us to start with electronic health records. It's really a shame. [Laughter] So, we really need these financial incentives, at least here, and we really need politicians who are better informed about technology. And I see that some discussions are starting; more and more politicians are trying to talk to young entrepreneurs, starting to talk to companies, going to the Valley, seeing what's happening there. So we're trying to catch up now, and hopefully we'll be there soon as well. [Laughter]

Daniel Kraft: Maybe I'll pitch this quick question to David. You know, we have 4G or something now, and 5G and 6G are coming, which means our smartphones and our wearables and our digital exhaust can be streamed at 100x the data rate - hopefully pretty soon, I think. So we need the FCCs of the world to help enable this data to flow, whether that means some kind of fast pass for healthcare data; all these privacy issues are critical; how do we layer things like blockchain on top of this to make data more shareable and safe? Where's that heading?

David Bray: Right. So, recognizing I'm not a commissioner of the FCC, nor am I Congressionally appointed, I can say you're right: 5G and more is around the corner in the industry, and it's going to start rolling out in stages. 5G is interesting, because you can actually do structured data elements within the signal and the message itself, and so you can actually say this part of the signal can be shared for these purposes, or I'm a type of first responder, I'm a type of doctor, and things like that; so we can even have ad-hoc mesh networks. And so, it will be interesting both from a community perspective as well as a hospital perspective: how does that include the broader ecosystem of care - physicians, but also first responders who are first at the scene for your health? Maybe there's a burning building and you're unconscious, but your phone is still active: can they find you so they can bring you to the hospital, and things like that?

Also, we need to think about how we can use this for smarter transportation of people's data, because as you know, the data is growing so large; if we were to port that file everywhere you went, that would be voluminous. And so, we need to start, like you said, thinking about how you do informed sharing. I'm particularly interested in - and I try to tell people, I try to talk about - public service as opposed to just government. I think a lot of the innovation is really going to come from individuals in the public who care about this issue, whether they themselves have an affected family member that they want to get better healthcare for, or they're just passionate about making some innovation in this area, as well as from public-private partnerships that are thinking beyond their own bottom line.

I do wonder if the world is changing so quickly that the traditional approaches of addressing these issues top-down will not succeed, and so the question is: what is an informed approach that protects the consumer, protects the industry, but at the same time keeps up with the speed of change that is expected to happen? And I don't have any easy answer, other than to say that what we did here at the FCC, when we had the FCC speed test app, is we made it open-source. This was done in late 2013, and you can imagine, given the events of late 2013, saying, "Hi, I'm with the government. Would you like to download an app that will monitor your broadband connection?" probably would not have been well-received, except we made it open-source. You can see that by design we weren't collecting your IP address, and by design we didn't know who you were within a five-mile radius. And so, maybe there are things that require public trust where we can expose what the algorithm is doing, or expose what is being done with the data, so you can see that we're doing privacy by design wherever possible, and then giving you informed choices: maybe you do want to share more data because you think it will help inform the cancer clinical trials that will make your loved one healthier, or you may choose not to, because you value your privacy more than whatever other outcome.

So, I think we need to rethink how we do public service at the same time that we're thinking about how we address healthcare and other things like this. It's going to be a very interesting challenge, and that's why I'm really glad that people [...] like Daniel and Shari are leading the way from the physician perspective, because we've really got to let the experts lead the way as to how we address these issues.

Michael Krigsman: We have just about five minutes left, and we began this discussion talking about the disconnected pieces. I think, Daniel and Shari, you said the dots need to be connected. And it seems like this is the fundamental problem, hearing you talk, because you've got the technology providers, you've got the physicians; there are all of these people working on it. And Daniel, you mentioned earlier that it will ultimately be financial incentives that enable the chain to be connected, that align the regulatory environment, and so forth. And so, in the last five minutes, I'd like to ask the three of you for your advice, both for the public sector and for the private sector: how do we create the environment where the dots can be connected, and we have a context that enables this to go forward and be used in practice? What advice do you have; specific advice?

Daniel Kraft: For example, I was in Washington this summer with Vice President Biden as part of the Cancer Moonshot summit, and a lot of the focus there is to make ten years of progress in cancer in five years, particularly in therapeutics. A lot of that was about aligning the ability to share data, and catalyzing that between pharma, academics, and hospital systems; speeding up IP, intellectual property; speeding up the FDA processes for new cancer drugs. Some of these lessons do get driven by policy and convening - getting everyone to agree to collaborate and connect the dots. The big HIMSS conference is coming up next month, where there are still all these issues about interoperability; a lot of systems just don't talk to each other, and they're not incentivized to, so that can be driven by policy.

And again, on the smaller scale, every individual can start playing with little AI chatbots and bring them to their clinician, and clinicians are out there starting to ask, "What are the tools that exist today, even if they're not paid for, or [...], that I can use to start enhancing my practice, or my touchpoints with my patients?" - so, not waiting for the future to arrive. Again, the future's already here, it's just not evenly distributed, as the famous quote goes, and it's up to us not just to predict the future, but to create it, using some of these new tools, and to catalyze that differently in Berlin, which has an amazing startup culture, in Silicon Valley, and in other parts of the world where not everyone is wearing an Apple Watch or Google Glass.

Michael Krigsman: So we have a question that's related to all of this from Twitter, and Joanne Young, who is a very experienced Chief Information Officer herself in higher-ed, is asking, "Does the promise of AI include halting, or reversing cost escalation?" So that, obviously, is a key part of it. Anybody want to jump in on that before we finish off with the remainder of the advice?

Shari Langemak: I think I'd like to take that question. So, of course, there is a fear that drug prices will increase even further with more personalized care through AI, because patients get very, very specific treatment, which is very costly. But AI can help us reduce costs in many ways, not only by reducing the rate of complications, but also by helping pharmaceutical companies reduce the time it takes them to bring a drug to market. We have already started working on recommendations that give pharmaceutical companies some sort of direction for a new drug, so they don't have to spend as much time on many different drug candidates to see whether they work or not; and it also helps, in the end, to get FDA approval, for example, because we can use the data, use AI, to see whether a drug is safe or not. So, in the end, I strongly believe AI will help us decrease costs in healthcare.

Daniel Kraft: And as the incentives shift - like right now in the United States, for Medicare, hospitals don't get reimbursed if a heart failure patient comes back within the first month - there are several companies that are setting up sensor networks in the patient's home, to look at their scale, their blood pressure, their activity, and to have that sort of early "check engine" light - red, yellow, green - to help identify the folks who are moving from green to yellow before they get to red. And so, we can be much more proactive, using this data and using the algorithms to pick up that digital fingerprint of someone heading off the deep end with heart failure, or mental health issues, or emphysema, and that can lower costs. We can pick up someone who's pre-diabetic before they become diabetic, and put them into programs like one from Omada Health, which is a digiceutical social-network platform that can coach people through behavior change, [...]. So, you can definitely lower costs by using the data in smart ways and leveraging early signals that can change the course of a disease path.
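A minimal sketch of the red/yellow/green banding Kraft describes for heart-failure remote monitoring; the signals (short-term weight gain, blood pressure, step count) and cutoffs are illustrative assumptions, not clinical criteria:

```python
# Minimal sketch of a red/yellow/green "check engine" band for a remote-monitored
# heart-failure patient. The signals (weight gain, blood pressure, step count)
# and cutoffs are illustrative assumptions, not clinical criteria.

def heart_failure_light(weight_gain_kg_3d: float, systolic_bp: float, steps_today: int) -> str:
    """Classify a remote-monitored heart-failure patient into green/yellow/red."""
    flags = 0
    if weight_gain_kg_3d >= 2.0:   flags += 2  # rapid fluid gain weighted most heavily here
    elif weight_gain_kg_3d >= 1.0: flags += 1
    if systolic_bp >= 160 or systolic_bp < 90: flags += 1
    if steps_today < 1000: flags += 1
    if flags >= 3:
        return "red: call the patient in today"
    if flags >= 1:
        return "yellow: nurse outreach and closer monitoring"
    return "green: continue routine monitoring"

if __name__ == "__main__":
    print(heart_failure_light(weight_gain_kg_3d=2.3, systolic_bp=88, steps_today=600))  # red
```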

Michael Krigsman: I love this idea of the digital fingerprint, and AI being used to interpret that data.

We're just about out of time. Shari, do you want to share your advice for making this all come together, and then David, we'll turn to you.

Shari Langemak: Absolutely. I can only agree with Daniel: the future won't wait for us. And especially when we're talking about Germany or other European countries, I hope that we start the discussion, that we start to get informed about these new technologies, and start implementing new laws that allow innovation and guard against the risks that come with it. It's a topic that is not easily touched here in Germany; we sometimes believe that by avoiding the topic we find a solution, [but] it's not like we can prevent innovation. So, I really hope that the discussion starts now.

David Bray: So, I will conclude with three thoughts, and enthusiastically agree with both Shari and Daniel. First, as a CIO, when I arrived, we were spending more than 85% of our budget just to maintain the legacy systems we had, and while we didn't see a budget increase, we were motivated to move to public cloud and new technologies, because we could see efficiencies at scale. So, I'm hopeful that with AI, even if there's not necessarily an overt financial incentive, the legacy way of doing things will later be shown to be so expensive that you have to move to the new one.

Two: when we made that move - and people thought we were crazy to do it in 2013-2014 - we needed safe spaces to experiment. I mean, yes, there are certain parts of medicine that you have to keep running well, and keep the trains going on time, but creating those safe spaces will be the key to showing what's possible and bringing everybody else along.

And third, it's going to take all of us. It's going to take physicians, it's going to take IT professionals, it's going to take the public. This is a massive endeavor, and so I look forward to seeing what sort of ecosystems of thought and action evolve as a result of this.

Michael Krigsman: Alright. Wow, this has been quite a discussion. You have been watching Episode #213 of CXOTalk, and we've been discussing healthcare, digital data, AI, the regulatory environment, and a lot of other topics. Share this with your friends, because the transcript will be up on the CXOTalk site early next week, and it's a rich treasure trove of material. I'd like to thank Daniel Kraft, David Bray, and Shari Langemak for spending time with us. We'll be back next Tuesday for our next show, and then we'll have a show the following Friday as well. Thanks so much, everybody. Thanks for watching. Bye-bye!

Published Date: Jan 19, 2017

Author: Michael Krigsman

Episode ID: 411