07.07.2022
9 min read

AI in Healthcare: Significant Potential but Not Without Obstacles

Despite AI’s popularity, the technology is still not widely integrated into clinical practice. However, it has the potential to provide information about a patient’s current health status, such as red blood cell count, cholesterol level, body fat percentage, and how many seconds last night’s beer will shave off their lifespan. The question is: how do we approach AI’s broader adoption?


Article by

Anton Dolgikh
Valentina Endovitskaya

AI is everywhere. We wake up with an alarm set by a voice assistant, watch movies recommended by AI, and unlock smartphones by holding them to our faces. AI is already deeply integrated into our daily lives. However, it is still not the kind of AI that will bring vitamins with orange juice in the morning or discuss the book on your bedside table. But it can deliver the weather report in a pleasant voice.

Unfortunately (or luckily, depending on your viewpoint), the human body is a complex organism. This means that predicting health indicators is significantly more complicated than requesting a song or weather report. One might be happy with Spotify, but will likely be disappointed by most of the diagnostic tools on the market. This is representative of a global trend — AI-based solutions encounter serious obstacles in medicine and healthcare. There is a huge gap between science and business, compounded by legal restrictions, not to mention the lack of labeled, publicly available data and high levels of user distrust.

Meanwhile, the world sorely needs smart solutions to modern medicine’s problems. The ongoing COVID-19 pandemic has shown us that the adoption of ML needs to be accelerated. Many remarkable things can be created in the near future with the help of AI, including new drugs, vaccines, and better clinical services. AI is a powerful tool that can be used to create diagnostic tools, contribute to the expansion of telemedicine, improve the accuracy of diagnoses, and even overcome logistical challenges. However, these valuable solutions are still unavailable to a broad audience.

Why Do We Need AI in Healthcare?

AI offers many benefits to healthcare and medicine. But how do we realize them? Here are several ways AI can create value for the healthcare industry:

  • Speeding up processes like drug development, matching patients to trials, processing documents, scheduling appointments, and optimizing plans. ML models can be trained to recognize mentions of drugs and illnesses, find optimal strategies, and recognize patterns in text or images.
  • Improving decision quality in diagnosis, treatment, patient monitoring, and prediction. One of the aims of AI is to analyze clinical guidelines and patient outcomes and apply this knowledge to produce precise recommendations.
  • Dealing with resource shortages through symptom checkers and telemedicine.
  • Reducing human error by using AI to flag unusual drug dosages, suspicious prescriptions, and anomalous test results.
  • Reacting quickly to changing situations by detecting adverse reactions through social media posts and opinion gathering. By leveraging social media and other data sources, healthcare businesses can quickly adapt to a new reality.
  • Making new discoveries, including significantly more accurate diagnoses, gene editing, and even longer lifespans. Using real-world data (RWD) sources, AI can surface new insights, dependencies, and connections.
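The first point above mentions models that recognize mentions of drugs in text. As a minimal sketch of the idea, the lexicon-matching toy below flags drug names in a clinical note; the drug list is invented for illustration, and a production system would instead use a trained NER model over a vocabulary such as RxNorm.

```python
import re

# Hypothetical mini-lexicon; real systems use trained NER models over
# standard drug vocabularies, not a hard-coded list like this.
DRUG_LEXICON = {"metformin", "lisinopril", "atorvastatin"}

def find_drug_mentions(text):
    """Return (drug, character offset) pairs for every lexicon hit."""
    mentions = []
    for match in re.finditer(r"[A-Za-z]+", text):
        word = match.group().lower()
        if word in DRUG_LEXICON:
            mentions.append((word, match.start()))
    return mentions

note = "Patient started on Metformin 500 mg; continue lisinopril."
print(find_drug_mentions(note))
```

Even this naive matcher shows the shape of the task: locating entities in free text so that downstream logic (dosage checks, trial matching) has structured input to work with.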

Challenges for Adopting AI in Healthcare and Medicine

The adoption of AI in healthcare and medicine has been slow, difficult, and hindered by myriad obstacles. This issue is not new and has been considered by a number of experts. We want to be constructive — optimistic, even — and suggest ways that AI can finally reach hospitals.

For many, it may not be obvious why a powerful tool like AI is not already being used in a clinical setting. Why don’t robots diagnose diseases and propose treatment? The short answer is that, while AI is still improving, obstacles remain on its path to clinical use. These challenges are not always technical. We have compiled extensive research on the topic and condensed the major challenges into the sections below.

Cultural and Political Restrictions

Although AI’s utility has been previously demonstrated, it remains largely unregulated, and no ethical standards for its use have been established. There are many questions to answer regarding data privacy and responsibility for AI decisions. It is still unclear to what extent AI-generated recommendations can be trusted, so there are many concerns that slow its use in medical settings.

Gap Between Business and Scientists

“While AI carries enormous potential, finding appropriate use cases that generate value is often problematic, and IT teams who build an AI hammer in search of a use-case nail are not always successful in finding the right application of the technology,” states Gartner®. Scientists and developers cannot easily determine what problems the healthcare industry needs to solve, while doctors and managers are not typically fully aware of ML capabilities. Consequently, a successful team must blend the worlds of science and business.

Practice teaches us that science should work hand in hand with business to understand challenges, goals, and restrictions, while business can get the best out of this relationship by gaining insights into AI’s potential and requirements.

User Distrust

Since most AI models are considered “black boxes,” their inner logic is not easily understood, unlike other software. Therefore, users do not consider the results reliable. Customers need to understand how a model makes decisions, but highly precise models are usually hard to interpret. Partly in response to this issue, the GDPR demands that algorithms working with patient data be explainable. This is a tough demand because it is not clear what exactly “explainable” means. Moreover, some medical experts believe that “black boxes must be black,” like the equipment used in anesthesiology that utilizes complex mathematical transformations of signals.
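One concrete reading of “explainable” is that a prediction can be decomposed into per-feature contributions. For a linear model this is trivial, as the hypothetical risk model below sketches; the features, weights, and bias here are invented for illustration, and the point is precisely that deep models admit no such direct decomposition.

```python
import math

# Hypothetical coefficients of a fitted logistic-regression risk model;
# real weights would come from training, not be hand-picked like these.
WEIGHTS = {"age": 0.04, "cholesterol": 0.01, "smoker": 0.8}
BIAS = -5.0

def explain(patient):
    """Return per-feature contributions to the log-odds and the risk score."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))  # sigmoid maps log-odds to [0, 1]
    return contributions, risk

contribs, risk = explain({"age": 60, "cholesterol": 220, "smoker": 1})
for feature, value in sorted(contribs.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:+.2f}")
print(f"risk = {risk:.2f}")
```

A clinician can read such an output directly (“age contributed +2.40 to the log-odds”), which is exactly the transparency that more accurate but opaque models give up.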

Lack of Publicly Available Data

AI development is driven by data, which is crucial for training models. As Archimedes said, “Give me a place to stand with a lever, and I will move the whole world.” A data scientist might say, “Give me data, and I will predict how to move the whole world.” Businesses keep their data in silos, hoarding it like treasure. But it is hard, if not impossible, to imagine a productive interaction between data science and business without data exchange.

Lack of Experience in Management

While data plays a significant role in AI development, it can be underutilized without competent management and well-established work processes.

Lack of Standards

Gathering data from multiple sources and processing it requires an enormous amount of time. A lack of specifications leads to a wide range of custom solutions, which can make it difficult to use data optimally. To normalize and standardize data and facilitate data exchange, various ontologies have been developed. They cover almost all fields of medicine and science, including SNOMED, NCIT, and AniML.
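In practice, normalization means mapping each source’s local codes onto one shared vocabulary. The sketch below illustrates the idea with two fictional hospitals and made-up target identifiers (they are not real SNOMED CT codes); the useful side effect is that codes with no mapping surface explicitly instead of silently polluting the dataset.

```python
# Hypothetical mapping from two hospitals' local lab codes to one shared
# vocabulary; the "STD:" identifiers are illustrative, not real ontology codes.
LOCAL_TO_STANDARD = {
    ("hospital_a", "GLU"): "STD:glucose",
    ("hospital_a", "HBA1C"): "STD:hba1c",
    ("hospital_b", "glucose_serum"): "STD:glucose",
}

def normalize(records):
    """Rewrite each record's local code to the shared one; flag unknowns."""
    normalized, unmapped = [], []
    for source, code, value in records:
        standard = LOCAL_TO_STANDARD.get((source, code))
        if standard is None:
            unmapped.append((source, code))
        else:
            normalized.append((standard, value))
    return normalized, unmapped

records = [("hospital_a", "GLU", 5.4), ("hospital_b", "glucose_serum", 6.1),
           ("hospital_b", "na_serum", 140)]
print(normalize(records))
```

Real ontologies are vastly larger and hierarchical, but the engineering pattern — translate at the boundary, quarantine what you cannot translate — stays the same.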

The Way Forward

The future of AI/ML is complex and requires effort and support from all parties concerned: scientists, developers, business, and government. Unfortunately, these sectors do not always interact effectively.


Based on DataArt’s rich experience in developing AI-based solutions, we know that it is not possible to transfer the ideas and models generated by scientists / data scientists into the business world without distortion. There is a barrier between scientists and businesspeople, and the languages they speak can sound like gibberish to one another. They need a proxy — a company (or a team) that can act as an interpreter, taking the models and ideas developed by scientists and turning them into sustainable solutions that will generate revenue or other resources for a business. This is a complex process that involves many experts, including developers, designers, and business analysts.

The same holds true for AI models. It is not possible to take a model trained once and keep using it unchanged for years. The clinical environment is fluid, flexible, and complex. Dictionaries are constantly being updated, new treatments and analytical methods are being developed, and the data fed into the model constantly changes. It is well known that a machine learning model’s performance can deteriorate over time. This has recently been investigated both by journalists from STAT and by MIT scientists.
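Catching this deterioration in practice usually means tracking a metric over time and alerting when it falls below a baseline. A minimal sketch, with invented weekly accuracy figures and an arbitrary tolerance:

```python
# Hypothetical weekly accuracies of a deployed model; in production these
# would be computed from labeled outcomes as they arrive.
def detect_drift(accuracies, baseline_weeks=4, tolerance=0.05):
    """Return the index of the first week whose accuracy falls more than
    `tolerance` below the mean of the initial baseline window, else None."""
    baseline = sum(accuracies[:baseline_weeks]) / baseline_weeks
    for week, acc in enumerate(accuracies[baseline_weeks:], start=baseline_weeks):
        if acc < baseline - tolerance:
            return week
    return None

weekly_accuracy = [0.91, 0.90, 0.92, 0.91, 0.89, 0.88, 0.84, 0.83]
print(detect_drift(weekly_accuracy))
```

Real monitoring adds statistical tests and input-distribution checks on top, but even this simple threshold turns silent degradation into an explicit retraining trigger.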

At DataArt, we speak the languages of both worlds and know how to keep models up to date. We have done this for many companies and solved problems in various industries. We believe this explains why Gartner mentioned DataArt in its list of Sample Vendors in the “AI in Clinical Development” field.

While science and business have already been brought together by software development specialists, legal regulation is yet to come. Who should be responsible for AI decisions and mistakes? How do we protect personal data used by smart applications? Are there ways to eliminate discrimination and biases? Only after answering these and many more questions can business build solutions that meet legal, ethical, and societal requirements. It is essential to popularize AI methods to build trust with users. Only knowledge can help eradicate stereotypes and accelerate AI usage.

The Future of AI in Healthcare and Medicine

By 2030, the artificial intelligence healthcare market is projected to be worth $208.2 billion. Pandemic prediction and prevention, deep patient analysis, and constant monitoring to control and prevent possible health issues will be within our grasp, as will the use of VR in training doctors.

At the same time, problems are inevitable. Gartner predicts that, by 2024, 40% of consumers will trick behavior tracking metrics to intentionally devalue personal data collected about them, making it difficult to monetize. Detecting and mitigating the resulting negative impact on AI model results will require significant effort. In addition, Gartner experts state that, by 2025, synthetic data will reduce personal customer data collection, avoiding 70% of privacy violation sanctions. Since legal regulation of personal data usage may remain undefined or severely restrictive, and user concerns may persist, synthetic data is going to prevail. Even today, a wide range of ML projects are based on synthetic data.
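The simplest form of synthetic data resamples each field from the real data’s empirical distribution, so no generated row corresponds to a real patient. The sketch below, with invented records, does exactly that; note the deliberate limitation called out in the docstring, which is why serious generators model correlations as well.

```python
import random

def synthesize(real_rows, n, seed=0):
    """Sample each field independently from the real data's empirical
    marginal distribution. Cross-field correlations are NOT preserved,
    a known limitation of this naive approach."""
    rng = random.Random(seed)  # seeded for reproducibility
    fields = list(real_rows[0].keys())
    return [{f: rng.choice([row[f] for row in real_rows]) for f in fields}
            for _ in range(n)]

# Invented toy records standing in for real patient data.
real = [{"age": 34, "dx": "asthma"}, {"age": 61, "dx": "copd"},
        {"age": 47, "dx": "asthma"}]
fake = synthesize(real, 5)
print(fake)
```

Every value in the output exists somewhere in the source data, yet no synthetic row needs to match any single real patient, which is the privacy argument for this family of techniques.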

No matter how slow and rocky the road to AI adoption in the medical field is, AI still offers a way of integrating science into business. We can see the future gradually approaching, step by step. With steady and constant efforts from teams around the globe, AI will move into every part of the healthcare system. There are many challenges ahead, but the future of AI in healthcare is promising.

 

Disclaimer — GARTNER and HYPE CYCLE are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.