Are We Ready for AI-Powered Radiologists, and Their Mistakes?

Anton Dolgikh, Head of AI at DataArt, considers whether we are ready for artificial radiologists, and their mistakes, as solutions are sought to lighten the load on the workforce.

“In this regard, the adoption of an automatic, AI-driven decision-making system looks like a strong alternative. Such systems have the obvious advantage of not being influenced by the time of day or by the number of patients, nor do they need breaks as human radiologists do. Moreover, they continually learn from new cases, just as radiologists do. However, problems lurk beneath this shiny surface of benefits. How should one test such a system before using it in clinical conditions?”

“In medicine, there is no black box that magically solves problems, because one needs to understand both the solution itself and how it is obtained. This need has given rise to the term “explainable AI” with regard to AI in healthcare. It is difficult to perceive what is going on inside the workhorses of image processing, neural networks (NNs); it is barely possible to predict their behaviour. Can we say how many images are needed to train a specific NN to a predefined accuracy level? No. And if the opinions of two radiologists coincide in only 60% of cases, how does one provide a reliable process for labelling the images used to train AI?”
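The 60% figure above is raw agreement, which can overstate reliability because some agreement happens by chance. A chance-corrected statistic such as Cohen's kappa is the usual check on a labelling process. The sketch below uses invented labels for two hypothetical raters (the data are assumptions for illustration, not from the article):

```python
# Minimal sketch: raw agreement vs. Cohen's kappa for two hypothetical
# radiologists labelling ten images (1 = abnormal, 0 = normal).
# The label sequences are invented to give 60% raw agreement.

def cohens_kappa(a, b):
    """Cohen's kappa for two binary label sequences of equal length."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement, estimated from each rater's marginal rates.
    p_a1 = sum(a) / n
    p_b1 = sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

rater_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
rater_2 = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]

agreement = sum(x == y for x, y in zip(rater_1, rater_2)) / len(rater_1)
print(agreement)                       # raw agreement: 0.6
print(cohens_kappa(rater_1, rater_2))  # chance-corrected: about 0.2
```

Here 60% raw agreement corresponds to a kappa of only about 0.2, which is why labelling pipelines for training data typically report a chance-corrected statistic rather than raw agreement alone.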

“According to the Kim-Mansfield scheme, radiologists’ errors can be divided into 12 types. Most of them are cognitive errors that are inherent to human nature. What about blind spots in knowledge? AI is only as good as its data. It can also be ignorant of rare cases that were not present in the training data sets.”
