Medical image segmentation, crucial for diagnosis and treatment planning, often relies on UNet's symmetrical encoder-decoder architecture to delineate organs and lesions accurately. However, UNet's purely convolutional design struggles to capture global semantic information, limiting its efficacy in sophisticated medical tasks. Integrating Transformer architectures addresses this limitation but incurs high computational costs, making such hybrids unsuitable for resource-constrained healthcare settings.
Efforts to boost UNet's global awareness include augmented convolutional layers, self-attention mechanisms, and image pyramids, yet these still fall short of effectively modeling long-range dependencies. Recent studies instead propose integrating State Space Models (SSMs), which enrich UNet with long-range dependency awareness while maintaining computational efficiency. However, existing solutions such as U-Mamba introduce excessive parameters and computational load, undermining their practicality in mobile healthcare settings.
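For context on why SSMs scale well, the core mechanism is a linear recurrence over the sequence: a hidden state is updated at each step and read out to produce the output, so the cost grows linearly with sequence length while the state can carry information across the whole sequence. Below is a minimal, illustrative NumPy sketch of the discretized recurrence that SSM-based layers like Mamba build on; the matrices and toy values here are assumptions for demonstration, not the paper's parameters.

```python
import numpy as np

def ssm_scan(x, A_bar, B_bar, C):
    """Discretized state space recurrence:
        h_t = A_bar @ h_{t-1} + B_bar * x_t
        y_t = C @ h_t
    Linear in sequence length, with a hidden state that can carry
    information across the entire sequence (long-range dependencies)."""
    d_state = A_bar.shape[0]
    h = np.zeros(d_state)                # hidden state
    y = np.empty_like(x)
    for t, x_t in enumerate(x):
        h = A_bar @ h + B_bar * x_t      # state update
        y[t] = C @ h                     # readout
    return y

# Toy example: 1-D input signal, 4-dimensional hidden state.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
A_bar = 0.9 * np.eye(4)                  # stable state transition
B_bar = rng.standard_normal(4)
C = rng.standard_normal(4)
print(ssm_scan(x, A_bar, B_bar, C).shape)  # (16,)
```

Mamba additionally makes the transition parameters input-dependent (a "selective" scan), but the linear-time recurrence above is what keeps the computational cost low relative to quadratic self-attention.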
Researchers from the Key Laboratory of High Confidence Software Technologies, National Engineering Research Center for Software Engineering, Peking University, School of Computer Science, Peking University, and Institute of Artificial Intelligence, Beihang University have proposed LightM-UNet, a lightweight fusion of UNet and Mamba with a parameter count of only 1M. They introduce the Residual Vision Mamba Layer (RVM Layer), which extracts deep features in a pure Mamba manner and amplifies the model's capability to model long-range spatial dependencies. This design directly addresses the computational constraints of real medical settings and marks a pioneering effort to integrate Mamba into UNet for optimization.
LightM-UNet uses a lightweight U-shaped architecture that integrates Mamba. It starts with shallow feature extraction via depthwise convolution, followed by Encoder Blocks that double the feature channels while halving the resolution. A Bottleneck Block maintains the feature map size while modeling long-range dependencies, and Decoder Blocks restore the image resolution through feature fusion and decoding. Throughout, the RVM Layer enriches long-range spatial modeling, while the Vision State-Space (VSS) Module augments feature extraction; a sketch of such a residual Mamba layer follows below.
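To make the RVM Layer concrete, here is a rough PyTorch sketch based purely on the description above: the input is normalized, passed through a Mamba block to model long-range dependencies, and added back through a residual connection with a learnable scale. This is an illustration assuming the open-source mamba_ssm package (whose selective-scan kernel requires a CUDA GPU), not the authors' implementation; the class name, the single learnable scale, and the token layout are assumptions made for demonstration.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (needs a CUDA GPU)

class RVMLayerSketch(nn.Module):
    """Illustrative sketch of a Residual Vision Mamba Layer.

    Normalizes the input tokens, runs them through a Mamba block to
    capture long-range dependencies, and merges the result with the
    input via a scaled residual connection."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mamba = Mamba(d_model=dim)            # selective state space block
        self.scale = nn.Parameter(torch.ones(1))   # learnable residual scale (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), i.e. spatial positions flattened to tokens
        return self.scale * x + self.mamba(self.norm(x))
```

In use, a feature map of shape (B, C, H, W) would be flattened to (B, H*W, C) before the layer and reshaped back afterward, so the Mamba block sees spatial positions as a token sequence.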
LightM-UNet outperforms nnU-Net, SegResNet, UNETR, SwinUNETR, and U-Mamba on the LiTS dataset, achieving superior accuracy while significantly reducing parameters and computational cost. Compared to U-Mamba, it improves average mIoU by 2.11%. On the Montgomery&Shenzhen dataset, LightM-UNet surpasses both Transformer-based and Mamba-based methods with a notably low parameter count, a reduction of 99.14% relative to nnU-Net and 99.55% relative to U-Mamba.
To conclude, the researchers have introduced LightM-UNet, a lightweight network that integrates Mamba into UNet. LightM-UNet achieves state-of-the-art performance on 2D and 3D segmentation tasks with only 1M parameters, offering over 99% fewer parameters and significantly lower GFLOPS than the latest Transformer-based architectures. Rigorous ablation studies confirm the effectiveness of the approach, which marks the first use of Mamba as a lightweight optimization strategy for UNet and a crucial step toward practical deployment in resource-constrained healthcare settings.
Check out the Paper and Github. All credit for this research goes to the researchers of this project.