
Google AI Proposes a Machine Learning Framework for Understanding AI Models in Medical Imaging

Jun 9, 2024

Recent advancements in machine learning are being actively applied to healthcare. Despite performing remarkably well on many tasks, these models often cannot provide a clear account of how specific visual changes affect their decisions. AI models have shown great promise, in some cases matching human performance, yet there remains a critical need to explain what signals they have learned. Such explanations are essential for building trust among medical professionals and may uncover novel scientific insights in the data that experts have not yet recognized. Google researchers introduced StylEx, a novel framework that leverages generative AI to address these challenges in medical imaging, focusing in particular on the lack of explainability in AI models.

Current methods for explaining AI models in computer vision, particularly in medical imaging, typically generate heatmaps that indicate how much each pixel in an image contributed to a prediction. While useful for showing the “where” of important features, these methods fall short of explaining the “what” and “why”: they do not capture higher-level characteristics such as texture, shape, or size that may underlie the model’s decisions. To overcome these limitations, Google’s StylEx pairs a StyleGAN-based image generator with a classifier that guides it. The approach generates hypotheses by identifying and visualizing the visual signals correlated with the classifier’s predictions.

The workflow involves four key steps: training a classifier to confirm the presence of relevant signals in the imagery, training a StylEx model to generate images guided by that classifier, automatically detecting and visualizing the top visual attributes influencing the classifier, and having an interdisciplinary panel of experts review the findings to formulate hypotheses for future research. The first step trains a classifier on a given medical imaging dataset for a specific task and requires it to achieve high performance (above 0.8 accuracy), confirming that the images contain information relevant to the task, as in the sketch below.
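To make the first step concrete, the following is a minimal PyTorch-style sketch of training a binary image classifier and checking that it clears a performance bar before any generator work begins. The dataset layout, the ResNet-18 backbone, and the use of plain accuracy against a 0.8 threshold are illustrative assumptions, not the paper’s actual pipeline.

```python
# Sketch of step 1: train a classifier and confirm the imagery carries signal.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def build_classifier(num_classes: int = 2) -> nn.Module:
    """ResNet-18 with a fresh head; any standard image classifier would do."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


@torch.no_grad()
def evaluate_accuracy(model: nn.Module, loader: DataLoader, device: str) -> float:
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)


def train_classifier(train_dir: str, val_dir: str, epochs: int = 10,
                     min_accuracy: float = 0.8) -> nn.Module:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    train_loader = DataLoader(datasets.ImageFolder(train_dir, tfm), batch_size=32, shuffle=True)
    val_loader = DataLoader(datasets.ImageFolder(val_dir, tfm), batch_size=32)

    model = build_classifier().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()

    accuracy = evaluate_accuracy(model, val_loader, device)
    # The framework only proceeds if the classifier shows the signal is learnable.
    assert accuracy > min_accuracy, f"Classifier too weak ({accuracy:.2f}); signal may be absent."
    return model
```

Only once this gate is passed does it make sense to train the classifier-guided generator of the next steps.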

Second, a StyleGAN2-based generator is trained to produce realistic images while preserving the classifier’s decision-making process. This generator is adapted to focus on attributes that significantly affect the classifier’s output. The third stage involves automatically selecting the top attributes in the StyleSpace of the generator that influence the classifier’s predictions. For each image, the researchers manipulate each coordinate in the StyleSpace to measure its effect on the classification output, identifying attributes that significantly change the prediction. This process results in counterfactual visualizations, where each attribute is independently adjusted to show its impact.
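The attribute-selection idea can be illustrated with a simplified sketch: perturb one StyleSpace coordinate at a time, regenerate the image, and measure how much the classifier’s prediction moves. The callables `generate_from_style` and `classifier` stand in for a trained StylEx generator and classifier; their interfaces, the shift size, and the ranking by average probability change are assumptions for illustration, not the released method.

```python
# Sketch of step 3: rank StyleSpace coordinates by their effect on the classifier.
import torch


@torch.no_grad()
def rank_style_attributes(generate_from_style, classifier, style_codes,
                          target_class: int, shift: float = 3.0, top_k: int = 10):
    """
    generate_from_style: callable(style_vector) -> image batch of shape (1, C, H, W)
    classifier:          callable(image) -> logits of shape (1, num_classes)
    style_codes:         tensor of shape (num_images, num_style_coords)
    Returns the indices of the top_k coordinates with the largest average effect
    on the target-class probability, i.e. candidate visual attributes.
    """
    num_coords = style_codes.shape[1]
    effect = torch.zeros(num_coords)

    for style in style_codes:
        base_prob = classifier(generate_from_style(style)).softmax(-1)[0, target_class]
        for i in range(num_coords):
            edited = style.clone()
            edited[i] += shift  # counterfactual: move one coordinate, keep the rest fixed
            new_prob = classifier(generate_from_style(edited)).softmax(-1)[0, target_class]
            effect[i] += (new_prob - base_prob).abs()

    effect /= len(style_codes)
    return torch.topk(effect, k=top_k).indices.tolist()
```

Rendering the base image next to its single-coordinate edits yields the counterfactual visualizations that the expert panel then reviews.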

Finally, an interdisciplinary panel of experts, including clinicians, social scientists, and machine learning engineers, reviews these visualizations. This panel interprets the attributes to determine whether they correspond to known clinical features, potential biases, or novel findings. The panel’s insights are then used to generate hypotheses for further research, considering both biological and socio-cultural determinants of health.

In conclusion, the proposed framework enhances the explainability of AI models in medical imaging. By generating counterfactual images and visualizing the attributes that affect classifier predictions, the approach provides a deeper understanding of the “what” behind the model’s decisions. The involvement of an interdisciplinary panel, with expertise extending beyond physiology and pathophysiology, ensures that these insights are rigorously interpreted, accounting for potential biases and suggesting new avenues for scientific inquiry.


Check out the Paper and Blog. All credit for this research goes to the researchers of this project.


[Source: AI Techpark]
