Medical diagnostics using artificial intelligence (AI) have proven to be less accurate than previously thought. According to a new study, neural networks may not work correctly when analyzing X-ray images, and the errors occur most often in images of women and people of color.
Bias in AI Analysis
AI may be biased when analyzing medical images from different institutions. Even the most advanced models cannot provide an optimal trade-off between overall accuracy and fairness in their predictions. The study highlights that the effectiveness of AI prediction depends on the data used to train the model. For example, if the AI is trained only on one type of patient within a single hospital, its predictions become less accurate for patients elsewhere. The situation changes if the neural networks are “fed” data about patients from different hospitals.
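To make the effect concrete, here is a minimal, purely illustrative sketch (synthetic data and a hypothetical make_hospital_data helper, not the study's actual models or dataset): a classifier trained on data from one hospital loses accuracy when applied to a second hospital whose data distribution is shifted.

```python
# Illustrative sketch only: synthetic "image feature" vectors, invented helper.
# Shows how a model trained at one hospital can degrade at another site
# whose data distribution differs (scanners, protocols, demographics).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_hospital_data(n, shift):
    """Generate synthetic feature vectors for one hospital.

    `shift` stands in for site-specific differences between hospitals.
    """
    X = rng.normal(loc=shift, scale=1.0, size=(n, 20))
    # The true label depends on the first few features, offset by the site shift,
    # so the relationship the model learns at one site does not transfer cleanly.
    logits = X[:, :5].sum(axis=1) - 5 * shift
    y = (logits + rng.normal(size=n) > 0).astype(int)
    return X, y

# Hospital A is the training site; hospital B is an external site with shifted data.
X_a, y_a = make_hospital_data(2000, shift=0.0)
X_b, y_b = make_hospital_data(2000, shift=0.8)

model = LogisticRegression(max_iter=1000).fit(X_a[:1500], y_a[:1500])

print("In-hospital accuracy:   ", accuracy_score(y_a[1500:], model.predict(X_a[1500:])))
print("Cross-hospital accuracy:", accuracy_score(y_b, model.predict(X_b)))
```

Running the sketch shows the in-hospital accuracy staying high while the cross-hospital accuracy drops sharply, which is the pattern the study warns about.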
Study Findings on AI Accuracy
To test this, researchers at Beth Israel Deaconess Medical Center in Boston trained AI models to predict from chest X-ray images whether patients had one of three conditions: fluid accumulation in the lungs, a collapsed lung, or an enlarged heart. When the models were evaluated on patients from five hospitals at once, the authors found that the predictions were inaccurate in some cases. The issue is especially critical when medical staff at one hospital use a model trained on data from other hospitals, notes NIX Solutions.
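As a purely hypothetical illustration of the kind of per-site, per-group audit such a finding implies (invented column names and numbers, not the study's data), model verdicts for one of the three conditions can be broken down by hospital and by patient subgroup:

```python
# Hypothetical audit sketch: compare prediction accuracy across hospitals
# and across patient subgroups to surface the gaps the study describes.
import pandas as pd

# Assume one row per chest X-ray with the model's verdict for pulmonary edema
# (fluid accumulation in the lungs); all values here are made up.
results = pd.DataFrame({
    "hospital":   ["A", "A", "B", "B", "C", "C", "C", "B"],
    "sex":        ["F", "M", "F", "M", "F", "M", "F", "M"],
    "edema_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "edema_pred": [1, 0, 0, 1, 0, 0, 0, 0],
})

results["correct"] = results["edema_true"] == results["edema_pred"]

# Accuracy per hospital and per subgroup reveals where the model falls short.
print(results.groupby("hospital")["correct"].mean())
print(results.groupby("sex")["correct"].mean())
```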
We’ll keep you updated on any new developments in AI and medical diagnostics. The findings from this study underline the importance of diverse training data for improving the accuracy of AI-based diagnosis.