The rising incidence of breast cancer has spurred extensive research efforts, especially as the disease has become the second leading cause of death after cardiovascular disease. Deep learning methods have been widely employed for early detection, achieving strong classification accuracy and enabling data synthesis to bolster model training. However, most of these approaches are unimodal, relying on a single type of breast cancer imaging. This restricts diagnosis to incomplete information and overlooks a comprehensive picture of the physical conditions associated with the disease.
Researchers from Queen’s University Belfast and the Federal College of Wildlife Management, New-Bussa, Nigeria, have addressed the challenge of breast cancer image classification with a deep learning approach that combines a twin convolutional neural network (TwinCNN) framework with a binary optimization method for feature fusion and dimensionality reduction. The method is evaluated on digital mammography images and digital histopathology breast biopsy samples, and the experiments show improved classification accuracy for both single-modality and multimodal classification. The study underscores the importance of multimodal image classification and the role of feature dimensionality reduction in improving classifier performance.
The study acknowledges that multimodal breast cancer images have received limited attention in the deep learning literature. It notes that Siamese CNN architectures have been used to solve unimodal and some multimodal classification problems in medicine and other domains, and it argues that a multimodal approach is essential for accurate, clinically acceptable classification models in medical image analysis. The under-utilization of the Siamese neural network technique in recent studies on multimodal medical image classification is what motivates this work.
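The twin design is easiest to see in code. Below is a minimal PyTorch sketch of a two-branch encoder in the spirit of a Siamese/twin CNN, not the authors' actual architecture; the layer sizes, channel counts, and names (`Branch`, `TwinEncoder`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """A small CNN encoder; one branch per image modality."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to (B, 32, 1, 1)
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class TwinEncoder(nn.Module):
    """Two branches, one per modality (mammography and histopathology)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mammo = Branch(in_channels=1, feat_dim=feat_dim)  # grayscale mammograms
        self.histo = Branch(in_channels=3, feat_dim=feat_dim)  # RGB histology slides

    def forward(self, x_mammo, x_histo):
        return self.mammo(x_mammo), self.histo(x_histo)

# Usage: encode a dummy batch from each modality.
enc = TwinEncoder()
f_m, f_h = enc(torch.randn(4, 1, 224, 224), torch.randn(4, 3, 224, 224))
print(f_m.shape, f_h.shape)  # torch.Size([4, 128]) torch.Size([4, 128])
```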
TwinCNN combines a twin convolutional neural network framework with a hybrid binary optimizer for multimodal breast cancer digital image classification. The framework covers both the architecture and the optimization process of the binary optimization method (BEOSA) used for feature selection: the TwinCNN extracts modality-specific features from the multimodal inputs through its convolutional layers, BEOSA selects and compresses those features, and a probability map fusion layer fuses the modalities based on both features and predicted labels.
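The paper's BEOSA is a population-based binary metaheuristic whose details are not reproduced here. As a rough stand-in, the sketch below swaps in a generic single-solution bit-flip search that scores a binary feature mask by cross-validated accuracy, followed by a simple probability-level fusion; the function names, the logistic-regression fitness proxy, and the averaging fusion rule are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def score(mask, X, y):
    """Fitness of a binary feature mask: CV accuracy on the selected columns."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def binary_select(X, y, iters=50):
    """Generic bit-flip hill climbing; a stand-in for the paper's BEOSA."""
    mask = rng.random(X.shape[1]) < 0.5  # random initial subset
    best = score(mask, X, y)
    for _ in range(iters):
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= True  # flip one random bit
        s = score(cand, X, y)
        if s >= best:
            mask, best = cand, s
    return mask, best

def fuse_probs(p_mammo, p_histo, w=0.5):
    """Late fusion: weighted average of per-modality predicted probabilities."""
    return w * p_mammo + (1 - w) * p_histo

# Toy usage on synthetic features/labels.
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, 60)
mask, acc = binary_select(X, y)
print(mask.sum(), "features kept, CV accuracy", round(acc, 3))
```

Any binary metaheuristic, BEOSA included, slots into the same interface: propose a mask, score it, and keep the better candidate.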
The study evaluates the proposed TwinCNN framework using digital mammography and digital histopathology breast biopsy samples from the benchmark MIAS and BreakHis datasets. For single modalities, classification accuracy and area under the curve (AUC) are reported as 0.755 and 0.861871 for histology and 0.791 and 0.638 for mammography. The fused-feature method yields accuracies of 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirm that combining image features and predicted labels improves multimodal classification performance, and they highlight the contribution of the proposed binary optimizer in reducing feature dimensionality and improving the classifier’s performance.
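For context on the reported figures, accuracy and AUC are standard metrics. This short snippet shows how they would typically be computed with scikit-learn; the label and probability arrays are synthetic placeholders, not the paper's outputs.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                   # ground-truth labels (toy)
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.5])   # predicted P(class = 1)

print("accuracy:", accuracy_score(y_true, (y_prob >= 0.5).astype(int)))
print("AUC:", roc_auc_score(y_true, y_prob))
```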
In conclusion, the study proposes a TwinCNN framework for multimodal breast cancer image classification, combining a twin convolutional neural network with a hybrid binary optimizer. The framework addresses multimodal classification by extracting modality-specific features and fusing them with an improved fusion method, while the binary optimizer reduces feature dimensionality and improves classifier performance. The results show high classification accuracy for both single modalities and fused multimodal features, and that combining image features with predicted labels outperforms single-modality classification. The work underscores the value of deep learning for early breast cancer detection and supports the use of multimodal data streams for improved diagnosis and decision-making in medical image analysis.