The world of healthcare is highly specialized; providers must acquire specific knowledge and expertise in a subset of medicine to excel in their chosen specialty. In contrast, the world of artificial intelligence (AI) development often takes a more holistic approach. Although AI can be focused on one area, it can find patterns and abnormalities from a large set of data taken from a variety of users and devices.
When these two fields collide, they can produce beneficial and innovative solutions. In a previous post, we discussed how AI is a huge helping hand for healthcare providers. Not only is it helping to address the needs of patients in underserved areas, but it’s also helping identify diseases faster and even assisting surgeons in the operating room. In case you missed it, you can read it here.
But with these new solutions come new challenges. Many of these difficulties weren’t predicted, while others have become intensified through the integration of these two fields. In this follow-up post, we’ll cover the challenges, biases, and cybersecurity concerns of integrating AI into healthcare.
Treading Carefully Into AI Innovation
Bob Kocher, MD, teaches at the Stanford University School of Medicine; he warns that “if we are not careful, AI could…unintentionally exacerbate many of the worst aspects of our current healthcare system.”
This isn’t a warning to stop integrating AI into health applications; instead, it’s a caution to tread slowly and carefully. Not everything is rainbows in AI or healthcare, and bringing the two fields together requires finesse and a critical mindset.
Often, when adding AI into any industry, many developers only see the opportunities and benefits without paying enough attention to the risks, security, and life-changing effects on users. But in order for AI innovation to usher in a smarter era for healthcare, strong consideration must be given to these potential pitfalls.
Because AI uses data across patient populations to draw conclusions, find patterns, and alert us to abnormalities, we have to place quite a bit of trust in it. A human could never mentally aggregate that amount of data on their own within a lifetime.
But the quality of an AI algorithm depends heavily on the quality of its training data. If that data is outdated, drawn from too small a group of patients, or simply insufficient for training, the algorithm can produce flawed results without ever alerting providers or developers to the issue.
If any of these possibilities occur, AI could incorrectly diagnose patients, and it’s up to doctors to take the technology’s recommendation with a grain of salt. It may be spot-on, but without an extensive review of the patient’s medical and family history, procedures, and past diagnoses, the provider cannot trust the AI with a 100% level of confidence.
AI algorithms can be racist, sexist, classist, and even ageist. They lack a fundamental understanding of humans and their brains, thoughts, and emotions. AI lacks compassion, empathy, and sympathy. And without any built-in mechanism for self-criticism, an AI will never doubt its own results, either.
In a study covered by MIT News, three facial analysis algorithms showed error rates of up to 34% due to skin-color bias. These algorithms performed worst on dark-skinned women, creating a considerable risk of missed diagnoses and delayed treatment for skin cancer.
Dr. Rebecca Pearson is the Chief Technology Officer of Chicago-based ThoughtWorks. She stresses that biases in AI algorithms are almost always unintentional. Many of them reflect the actual biases already present in our current healthcare system. That’s what makes economic and social biases in algorithms so troubling: they perpetuate the cycle through future technology and the data it gathers.
To adequately address these biases in algorithms and the healthcare system, experts recommend that both doctors and AI developers take an interest in sociology, economics, family dynamics, and other people- and money-based fields. Cultivating a greater understanding and more sympathy for different groups of patients can vastly improve biases in technology and the exam room.
Just as any computer in a medical office or hospital needs regular security updates, so do all AI applications. Because these applications often contain massive amounts of sensitive patient data, cybersecurity is a significant concern for AI technology. It’s far easier for a doctor to keep a patient’s condition confidential than it is for an AI application to keep that information perfectly secure.
As such, AI applications need regular maintenance, with their code frequently brought up to current security and AI standards. Indeed, a study reported by ScienceDaily found that AI innovation poses a real threat to patients’ personal and health data.
AI Is a Tool, Not a Replacement
One thing is for sure: AI should never take over every aspect of healthcare. It can be a great supplement to a provider’s knowledge and analytics, but the provider must have the last say in any treatment, surgery, or diagnosis. We like to say AI’s intelligence is “book smart, but not exactly street smart,” and that’s where humans are needed to fill the gaps.
Ultimately, integrating AI into healthcare won’t be fast, and it won’t be easy; hiccups will happen, and they will surely make us doubt whether this technology has a place in medicine. We’ll need multiple regulatory parties checking AI health applications regularly to keep risks in check. We’ll also certainly need more research and case studies on AI in healthcare. And more education for every stakeholder involved, whether patients, providers, or developers, is a necessity.
And as Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, says, “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” We must not give AI full control or authority over any aspect of the healthcare system until it has proven itself many times over.
Now that you’ve gotten an in-depth look at both the benefits and challenges that AI brings to healthcare, what do you think of this technology’s future in this field? Let us know your thoughts in the comments!
Do you have an idea for a disruptive medical device, but you don’t know where to begin? Dogtown Media is an FDA-compliant developer with extensive experience in bringing health tech innovations to life.
Contact us today for a Free Consultation!