Looking at Data in a Different Way Is Key to Unlocking Healthcare AI’s Value — Part 2

July 18, 2019 - 9 minutes read


By providing the right dose of innovation, artificial intelligence (AI) can change modern medicine for the better. But a change in perspective on data is sorely needed to realize AI’s true potential.

Welcome to the second part of our series on how looking at data in a different way can unlock more value from AI. In our first entry, we explored why it has been so difficult for the healthcare industry to embrace AI. We also examined how reframing the role of healthcare data could improve results drastically. In case you missed it, you can catch up on the first part here.

In this post, we’ll delve into why avoiding bias is necessary to create successful AI-driven care. We’ll also take a look at one of the biggest challenges blocking medical AI innovation: Access to large-scale data.

Bias Could Make or Break AI’s Future in Healthcare

Modern healthcare systems all suffer from one common malady — they are all reactive and, for the most part, only focus on treating ailments when they become too big to ignore. Emerging technologies like wearables and AI are helping to change this. But making the medical industry more proactive and preventative is a piecemeal process. Immense obstacles like data access and algorithmic bias stand in the way of researchers, clinicians, and innovators trying to accelerate it.

In the past few years, thousands of pilot programs, research studies, and medical development startups have sprung up with the hope they can provide AI algorithms that shift healthcare’s focus from reactive to preventative. And in controlled experiments, many of these AI applications show great promise. But when you account for more real-world variables, things can get a little tricky.


Constance Lehman, MD, PhD, is a professor of radiology at Harvard Medical School and chief of Massachusetts General Hospital’s breast imaging division. In mammography, practitioners often arrive at different interpretations of the same image. It’s a huge problem, but fortunately, Lehman believes AI could address it.

She explains, “Some structures in the body lend themselves well to precise measurements, which makes it easier to see when changes occur.  But in other areas, radiologists are looking at very subtle patterns of tissue structure. Mammography is probably one of the most extreme examples of that… Human brains just don’t do a great job of processing those signals, and that results in variation of interpretation.”

To solve this, Lehman has developed an AI tool that’s more accurate than humans at identifying patients with a high risk of breast cancer, currently the second most common cancer diagnosis for women in the United States. Humans can be biased for a variety of reasons: preconceived notions, patient history information, and even the financial impact a reading could have on a patient can all color an interpretation.

With AI, these biases are only present if we choose to introduce them. The algorithms can take such contextual information into account, or they can assess the case using only the data and images at hand. Of course, striking this balance is easier said than done, and getting it wrong (creating biased algorithms) could undo any trust earned from medical providers.
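To make that choice point concrete, here is a minimal sketch (not Lehman’s actual pipeline, and with entirely hypothetical column names and data) of how contextual fields that can carry human bias are either dropped or kept before training a risk model:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical records: image-derived features plus contextual fields that
# could let human biases leak into the model.
df = pd.DataFrame({
    "tissue_density":    [0.61, 0.72, 0.55, 0.80, 0.67, 0.74],
    "lesion_contrast":   [0.12, 0.35, 0.08, 0.41, 0.22, 0.30],
    "insurance_tier":    [1, 3, 2, 1, 2, 3],     # contextual, potential bias
    "prior_visit_count": [2, 7, 1, 9, 4, 6],     # contextual, potential bias
    "high_risk":         [0, 1, 0, 1, 0, 1],     # label
})

CONTEXTUAL = ["insurance_tier", "prior_visit_count"]

# The choice point: train on image-derived features only, or deliberately
# include the contextual fields as well.
X = df.drop(columns=CONTEXTUAL + ["high_risk"])  # image-only features
y = df["high_risk"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y
)
model = LogisticRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))
```

Leaving the contextual columns out keeps the model blind to them; adding them back is a deliberate, documented decision rather than an accident of the dataset.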


Lehman references a new algorithm that claimed to identify hip fractures better than human radiologists: “… it turns out the model learned that if you have an x-ray that was taken with a portable machine, it’s more likely that you’re going to have a fracture on the image. That’s because patients who are in too much pain to get out of bed will have their images taken by portable machines, and those patients are more likely to have fractures.”

Avoiding bias begins with your implementation plans. Lehman says, “These algorithms are very, very intelligent – they’re designed to be.  And they will outsmart a poor data strategy every time.”
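One practical way to catch this kind of shortcut learning is to check which inputs actually drive a model’s predictions. The sketch below is a hypothetical illustration on synthetic data, not the hip-fracture model Lehman describes; it uses scikit-learn’s permutation importance to show how a metadata flag like “portable machine” can dominate the signal:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Synthetic setup: the fracture label is driven mostly by whether a portable
# machine was used (a proxy for how sick the patient is), not by the image
# feature itself.
portable_machine = rng.integers(0, 2, size=n)
image_feature = rng.normal(size=n)
fracture = (0.8 * portable_machine + rng.normal(0.0, 0.3, size=n) > 0.5).astype(int)

X = np.column_stack([image_feature, portable_machine])
model = RandomForestClassifier(random_state=0).fit(X, fracture)

result = permutation_importance(model, X, fracture, n_repeats=10, random_state=0)
for name, importance in zip(["image_feature", "portable_machine"], result.importances_mean):
    print(f"{name}: {importance:.3f}")

# If "portable_machine" carries nearly all the importance, the model has
# learned the acquisition context rather than anything about the fracture.
```

Checks like this belong in the implementation plan from the start, which is exactly the “data strategy” Lehman is pointing at.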

AI Success Depends on Large-Scale Data Access

Part of the bias problem in healthcare AI stems from the divide between academic researchers and companies racing toward commercialization. For Lehman, perfecting the AI tool takes priority over commercializing it. But for many startups in San Francisco, New York City, and other tech hubs, getting to market first is critical for success. As a result, their attention to building bias-free products may take a backseat.

Beyond this, bias in AI also stems largely from a lack of diverse data. An AI model is only as good as the data you feed it, but getting large quantities of high-quality, accurate, and up-to-date data can be difficult. Calum MacRae, MD, PhD, chief of cardiovascular medicine at Brigham & Women’s Hospital, discusses this problem in depth and how it affects the system at scale:

“One of the problems is that we’re still recruiting patients at an individual level instead of creating an environment where data contribution, in an anonymized and secure way, is an expectation of interacting with the healthcare system. It’s almost inconceivable that we study only a very small subset of individuals and then make inferences for the whole population. We should be doing the exact opposite. We should be studying large populations and then applying those insights to individuals.”


Without meaningful data to train and validate your AI model, every downstream step is compromised. Lehman highlights this with an example: “It’s very easy to say that you’ve externally validated your models. But if you’re doing the validation at the hospital down the street because it’s easy to get their data, you’re probably validating the model on a very similar population to the one that trained it. That isn’t going to create a tool that will produce good results for everyone.”

To solve this conundrum, Lehman and her team are validating their breast cancer risk identification algorithm at many facilities besides Mass General, where it was created. “If we want our models to be robust across races, ages, and different socioeconomic situations, we need to develop those partnerships and make that extra effort to incorporate other populations,” she explains.
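Here is a rough sketch of what site-stratified external validation can look like in code. The data and site names are entirely synthetic and hypothetical, not Lehman’s study sites; the point is simply that a model is evaluated separately on each population it is meant to serve:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_site(n, shift):
    """Simulate one hypothetical site whose population differs by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X[:, 0] + rng.normal(0.0, 0.5, size=n) > shift).astype(int)
    return X, y

# Train only on data from the "home" institution.
X_home, y_home = make_site(400, shift=0.0)
model = LogisticRegression().fit(X_home, y_home)

# Validate separately at sites whose populations drift away from the
# training population.
for name, shift in [("nearby_hospital", 0.1), ("rural_clinic", 1.0), ("overseas_center", 2.0)]:
    X_ext, y_ext = make_site(200, shift)
    acc = accuracy_score(y_ext, model.predict(X_ext))
    print(f"{name}: accuracy = {acc:.2f}")

# A model that only holds up at the nearby, similar site is exactly the
# failure mode described above.
```

Reporting per-site results, rather than one pooled number, is what makes the “hospital down the street” shortcut visible.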

Stay Tuned and Stay Healthy

Gaining trust and acceptance will be imperative for AI to flourish in medical ecosystems around the world. But to earn that trust, bias must be avoided at all costs. Re-evaluating data’s role in the healthcare industry and giving AI developers access to it are great first steps.


But as we’ll see in part 3 of this series, that’s only part of the equation — we’ll also have to change how data is collected and concoct an effective way to measure success. Stay tuned!
