To address the information redundancy and the insufficient capture of local structure caused by skip connections, we propose a multi-scale attention gate (MSAG) in the decoder to improve the accuracy of key-region feature extraction at low computational cost. Meanwhile, the dynamic ...
Specifically, we propose a Multi-Scale Adaptive Spatial Attention Gate (MASAG), which dynamically adjusts the receptive field (local and global contextual information) to ensure that spatially relevant features are selectively highlighted while minimizing background distractions. Extensive evaluations ...
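For concreteness, the following is a minimal sketch of how such a multi-scale spatial attention gate could be realized, assuming a PyTorch setting; the module name, the dilated-branch design, and the gating rule are illustrative assumptions rather than the exact MSAG/MASAG formulations of the works above.

```python
# A minimal sketch of a multi-scale spatial attention gate (illustrative only).
import torch
import torch.nn as nn


class MultiScaleAttentionGate(nn.Module):
    """Gates a skip-connection feature map with a spatial mask built from
    parallel dilated convolutions (local vs. wider receptive fields)."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        # Fuse the multi-scale responses into a single-channel spatial mask.
        self.fuse = nn.Conv2d(channels * len(dilations), 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # `gate` is the (upsampled) decoder feature with the same shape as `skip`.
        x = skip + gate
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        mask = torch.sigmoid(self.fuse(multi_scale))   # (N, 1, H, W) in [0, 1]
        return skip * mask                              # highlight relevant regions


if __name__ == "__main__":
    msag = MultiScaleAttentionGate(channels=64)
    skip = torch.randn(1, 64, 32, 32)
    gate = torch.randn(1, 64, 32, 32)
    print(msag(skip, gate).shape)  # torch.Size([1, 64, 32, 32])
```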
An attention gate module was used to optimize the skip connections so as to retain the most significant features. To make full use of the effective information, the mU-Net proposed by Seo et al. presented an adaptive filtering structure to extract high-resolution edge information and small-target information...
In this article, we add attention gates (AGs) to the skip-connection structure, introducing attention and multi-scale mechanisms to solve the above problems. Our model obtains better segmentation performance while introducing fewer parameters.
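The additive attention gate placed on a skip connection, in the spirit of Attention U-Net-style designs, can be sketched as follows; the channel sizes, interpolation choice, and module name are assumptions for illustration, not the precise configuration of the cited articles.

```python
# A minimal sketch of an additive attention gate on a U-Net skip connection.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Resize the gating signal to the skip resolution before adding.
        g = nn.functional.interpolate(gate, size=skip.shape[2:], mode="bilinear",
                                      align_corners=False)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(g))))
        return skip * attn  # suppress irrelevant regions, keep salient ones


if __name__ == "__main__":
    ag = AttentionGate(skip_channels=64, gate_channels=128, inter_channels=32)
    skip = torch.randn(1, 64, 64, 64)     # encoder feature
    gate = torch.randn(1, 128, 32, 32)    # coarser decoder feature
    print(ag(skip, gate).shape)           # torch.Size([1, 64, 64, 64])
```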
In that case, data-driven models are more effective and have attracted increasing attention. Data-driven models based on historical industrial big data usually make decisions using online data from cloud or edge terminals. Such models usually build particular models based ...
Att-UNet achieves the best RVD metric, which may be attributed to its attention gate: it suppresses irrelevant regions, focuses on useful salient features, and aids feature clustering, which may yield lower RVD scores. However, the Dice coefficient, the main evaluation metric, is 7.1% higher for ...
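For reference, the two metrics compared above can be computed as follows for binary masks; this is the standard formulation of the Dice coefficient and the relative volume difference (RVD), not code from the compared models.

```python
# Standard Dice coefficient and relative volume difference (RVD) for binary masks.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2 * |P ∩ G| / (|P| + |G|); 1.0 means perfect overlap."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def relative_volume_difference(pred: np.ndarray, target: np.ndarray,
                               eps: float = 1e-8) -> float:
    """RVD = (|P| - |G|) / |G|; values closer to 0 mean better volume agreement."""
    pred_volume = pred.astype(bool).sum()
    target_volume = target.astype(bool).sum()
    return (pred_volume - target_volume) / (target_volume + eps)


if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=np.uint8)
    gt[16:48, 16:48] = 1
    pr = np.zeros_like(gt)
    pr[18:48, 16:48] = 1
    print(f"Dice: {dice_coefficient(pr, gt):.3f}, "
          f"RVD: {relative_volume_difference(pr, gt):+.3f}")
```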
The Swin Transformer module includes two successive modified transformer blocks; the MSA block is replaced with the window-based multi-head self-attention (W-MSA) and the shifted window-based multi-head self-attention (SW-MSA). In th...
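A minimal sketch of the W-MSA/SW-MSA alternation is given below, assuming a PyTorch setting; the window size, head count, and the omission of the attention mask for shifted windows are simplifying assumptions, not the full Swin Transformer block.

```python
# A minimal sketch of window-based self-attention with an optional cyclic shift.
import torch
import torch.nn as nn


def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    # x: (N, H, W, C) -> (num_windows * N, ws*ws, C)
    n, h, w, c = x.shape
    x = x.view(n, h // ws, ws, w // ws, ws, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, c)


def window_reverse(windows: torch.Tensor, ws: int, h: int, w: int) -> torch.Tensor:
    # Inverse of window_partition.
    n = windows.shape[0] // ((h // ws) * (w // ws))
    x = windows.view(n, h // ws, w // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(n, h, w, -1)


class WindowAttentionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int, window_size: int, shifted: bool):
        super().__init__()
        self.ws = window_size
        # Shift by half a window for SW-MSA, no shift for W-MSA.
        self.shift = window_size // 2 if shifted else 0
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, h, w, c = x.shape
        if self.shift:
            x = torch.roll(x, shifts=(-self.shift, -self.shift), dims=(1, 2))
        windows = window_partition(x, self.ws)
        attn_out, _ = self.attn(windows, windows, windows)
        x = window_reverse(attn_out, self.ws, h, w)
        if self.shift:
            x = torch.roll(x, shifts=(self.shift, self.shift), dims=(1, 2))
        return x


if __name__ == "__main__":
    x = torch.randn(1, 16, 16, 96)                      # (N, H, W, C)
    w_msa = WindowAttentionBlock(96, num_heads=3, window_size=4, shifted=False)
    sw_msa = WindowAttentionBlock(96, num_heads=3, window_size=4, shifted=True)
    print(sw_msa(w_msa(x)).shape)                       # torch.Size([1, 16, 16, 96])
```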
Meanwhile, we introduce the self-attention mechanism of the Transformer into the guided depth-map super-resolution task to extract global features through a Transformer block that utilizes feature maps from a semi-coupled convolutional block. In addition, we introduce a multi-scale feature fusion ...
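A simple way to realize such multi-scale feature fusion is sketched below; the upsample-concatenate-project rule and the channel counts are illustrative assumptions, not the specific fusion module of the cited super-resolution method.

```python
# A minimal sketch of multi-scale feature fusion across feature levels.
import torch
import torch.nn as nn


class MultiScaleFusion(nn.Module):
    def __init__(self, in_channels: list, out_channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, features: list) -> torch.Tensor:
        # Upsample every feature map to the resolution of the finest one.
        target_size = features[0].shape[2:]
        upsampled = [features[0]] + [
            nn.functional.interpolate(f, size=target_size, mode="bilinear",
                                      align_corners=False)
            for f in features[1:]
        ]
        return self.fuse(torch.cat(upsampled, dim=1))


if __name__ == "__main__":
    f1 = torch.randn(1, 32, 64, 64)   # fine scale
    f2 = torch.randn(1, 64, 32, 32)   # middle scale
    f3 = torch.randn(1, 128, 16, 16)  # coarse scale
    fusion = MultiScaleFusion([32, 64, 128], out_channels=64)
    print(fusion([f1, f2, f3]).shape)  # torch.Size([1, 64, 64, 64])
```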
Based on the baseline ResUNet architecture, an attention gate was introduced in the decoder to focus on the lesion region and reduce the possibility of false positives. Zhang et al. [25] presented a novel attention-gate ResUNet model known as AGResU-Net. This network integrated residual ...
MAN (Wang et al., 2024) also introduced large-kernel convolutional attention and multi-scale large-kernel convolution based on GoogLeNet (Szegedy et al., 2015) and ConvNeXt (Liu et al., 2022), and its simplified gated spatial attention unit (GSAU) was designed using a Simple Gate (SG) (...
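The Simple Gate and a gated spatial attention unit in its spirit can be sketched as follows; the depth-wise kernel size and the residual layout are assumptions for illustration, not the exact GSAU of MAN.

```python
# A minimal sketch of a Simple Gate (SG) and a gated spatial attention unit.
import torch
import torch.nn as nn


class SimpleGate(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x.chunk(2, dim=1)   # split channels into two halves
        return a * b               # element-wise gating, no extra parameters


class GatedSpatialAttentionUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * 2, kernel_size=1)
        # Depth-wise convolution injects spatial context into the gate branch.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=7,
                                 padding=3, groups=channels)
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.expand(x).chunk(2, dim=1)
        gated = a * self.spatial(b)          # gate one branch with spatial attention
        return x + self.project(gated)       # residual connection


if __name__ == "__main__":
    gsau = GatedSpatialAttentionUnit(channels=48)
    x = torch.randn(1, 48, 32, 32)
    print(gsau(x).shape)  # torch.Size([1, 48, 32, 32])
```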