In part 2 of this week's Q&A, Medicomp Systems' CMO has some direct words for the hype surrounding artificial intelligence, the usefulness of natural language processing, and the potential of his own company.
This is the second of a two-part C-Suite Q&A with Jay Anders, MD, a former internist who is now Chief Medical Officer with Medicomp Systems.
An internist with years of experience in health technology, Anders approaches the industry with a detailed perspective and an inspiring confidence.
What are the biggest recurring challenges you face in your work with Medicomp?
Our main goal is to assist the physician at the point of care, and we run up against the fact that $30 billion plus was put out through meaningful use, everyone went out and bought a system, and no one thought about asking physicians what they needed or wanted to practice medicine. So one of our biggest challenges is convincing entrenched systems that we actually have a way to help physicians do their work, allow them to see more patients more accurately, and get out of their offices without spending more time thumbing through a lot of poorly designed software. We’ve recently had a few companies pick us up for just that reason: people were simply not using whatever documentation system they had because the doctors hated it.
It’s trying to bridge between what a physician wants or needs, what an enterprise thinks they need, and what a software vendor thinks they can provide.
And how do you get all those people on the same page?
In a multitude of ways. Usually, there’s a problem that someone wants to solve. A large cancer hospital in New York isn’t getting the data it needs from its physicians, using the documentation tools they have, to actually do analytics and research. They’re just not getting the data points; they’re not getting a good collection in their system to actually look at. So they’ll come to us and say, ‘Can you show us what you have, to see if we can improve this problem?’
A children’s hospital came to us and said ‘We’ve got a system that we’re just floundering in, our physicians hate it, can you help us with that?’ So we put in our documentation tool, and they wound up having a 20-30% increase in productivity with a drop in overtime, and all of a sudden the doctors are happy with what they have.
The hurdle really depends on what pain point the person you’re talking to has.
Do you see any resistance to adoption?
We see some resistance to adoption, but it’s really on a vendor level. All vendors, including us, are prideful. They think they can create what we have, what’s taken us 38 years to do, and then they figure out that they can’t. They go, ‘We’re not going to put your system in, because we’re not going to rely on you to do that,’ or ‘we can do this on our own,’ or ‘we don’t really see a need,’ without ever asking physicians. When we go directly to physician organizations and present what we have, overwhelmingly the physicians we talk to say, ‘Oh, we really like this, it could work for us.’ Then you start bridging the gap, and we’ve gone directly to institutions to do that. So, like I’ve said, it really depends on the hurdle.
Let’s talk about the future. What are you, as CMO of Medicomp, looking forward to in 2018?
Well, in the next several months we’ll roll out our Quippe Clinical Lens product, and I call that “the medical universal translator.” It’s an interoperability platform, but I firmly believe this: if IBM Watson, or Google DeepMind, or anyone else requires good, solid, coded clinical data to do what they do, we have the solution for that. And we have it in such a way that physicians won’t mind giving it up and recording it. Physicians right now don’t like their EMRs because they think it’s a billing system, collecting a lot of data for billing. We’re going far beyond that: we’re collecting clinical information that can be used by a lot of different platforms.
We’re looking to focus it. We can make AI work. We can make IBM Watson work in healthcare, because we can give that system the concise, accurate clinical information it needs to actually assist physicians. The other thing we can do is take information out of those systems and present it to a physician at the point of care, in a usable format, in context with what they are doing right then and there.
That is pretty, well, audacious. We like that here.
We at Medicomp like it too.
When you say “universal translator,” how does that work?
We can take data from any source, place it in a format that is clinically usable and can be operated on at the point of care, and then return it in the exact same format to be acted on by any system. So it’s not a machine learning system; again, it’s curated. But it can really give these machine learning systems what they need. That’s opposed to something like NLP: you hear a lot about natural language processing. I’ve been in this for a long time, I’ve seen multiple generations of speech recognition and natural language processing, and I still haven’t gotten past this: if I say, “Your dog has breast cancer,” it’s going to pick up “breast cancer,” but it’ll miss the “your dog” part of it.
When the pickup of the language isn’t 100 percent, and those errors aren’t corrected, how long does it take to come back with something that’s usable? If you get it wrong, you’ve got it wrong, and there’s very little tolerance for that in medicine. I’ve seen some of these systems take forever to come back with just what you’ve said and parse out the words they think are important, so the poor person using it has to be a correctionist and tell the processor what’s important. How is that helpful to me as a doctor? Natural language processing is nothing more than machine learning; that’s all it’s doing.
That leads into something I know you have feelings on, which is AI in healthcare. We’ll start with a basic, almost stupid-sounding question: with respect to healthcare, how do you define what is actually AI?
That’s actually a very interesting question. I think AI in healthcare is struggling to figure out exactly what its place is. And what I mean by that is it first got talked about in the areas of population health management and population analysis. ‘Can I really make a population of diabetic patients better, what are their patterns, what works and what doesn’t work?’ There was that kind of movement.
And then it went, I would say, almost off the rails. It went into ‘well, I think AI can look at a chest X-ray and diagnose that chest X-ray better than a human.’ I remember that little project with Watson where they looked at all of that. I don’t think there’s really a definitive answer yet as to where AI is going to fit into healthcare.
The problem I have with it always being associated with population health management is that it’s all well and good for an enterprise to look at how they’re doing with their diabetic patients, or with someone with chronic renal failure, or at what the best treatment for a particular cancer patient is.
But when it gets down to the individual patient…physicians don’t deal with “populations.” They deal with single patients, in their offices. When I was in full-time practice, I had a population of 1, 30 times per day. That’s where I think it starts to break down. I don’t think it’s found its legs yet in healthcare.
It’s been a fundamental question we’re trying to work around: what is it, and what is it going to be? Does the buzz outstrip the meaning at this point?
There’s a lot more buzz than substance, that’s exactly what I would call it right now.