Keywords: Multi-scale attention; Dense connection; Lightweight; Group operation
The intricacy of segmentation is intensified by the morphological variability of cell nuclei. While the U-Net model can achieve commendable outcomes in such contexts, it encounters difficulties, including semantic inconsistencies between the ...
Secondly, based on a local cross-channel interaction strategy, a lightweight efficient channel attention mechanism (LECA) is designed. The kernel size of its 1D convolution is determined by the channel number and the chosen coefficients. Multi-scale feature input is used to retain more detailed features of different ...
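As an illustrative sketch only (not the LECA authors' code), an efficient-channel-attention layer of this kind can be written in PyTorch; the coefficients `gamma` and `b` mapping channel count to 1D kernel size follow the widely used ECA-Net rule and are assumptions here.

```python
import math
import torch
import torch.nn as nn

class LightweightChannelAttention(nn.Module):
    """ECA-style channel attention: global pooling + 1D conv over channels.

    The 1D kernel size k is derived from the channel count C via
    k = |log2(C)/gamma + b/gamma|, rounded up to an odd integer, so wider
    layers interact across more neighboring channels.
    """
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 == 1 else t + 1  # kernel size must be odd
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> channel descriptor (B, 1, C)
        w = self.pool(x).squeeze(-1).transpose(1, 2)
        # Local cross-channel interaction, then per-channel gate in (0, 1).
        w = torch.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)
        return x * w  # reweight channels; no fully connected layers


if __name__ == "__main__":
    y = LightweightChannelAttention(64)(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

The absence of fully connected layers is what keeps the mechanism lightweight: the only learned parameters are the k weights of the 1D convolution.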
First of all, the multi-scale feature fusion block (MSFFB) can extract multi-scale features through filters with different receptive fields. Secondly, the channel shuffle attention mechanism (CSAM) encourages the flow of information across feature channels and enhances the ability of feature fitting. Moreover, a multi-scale feature attention module is designed to provide instructive multi-scale attention information for shallow features. Particularly, we propose a novel upscale module, which adopts dual paths to upscale the features by jointly using sub-pixel convolution and nearest ...
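The dual-path upscale idea can be sketched as follows; the module name, layer widths, and fusion by 1x1 convolution are assumptions, since only the sub-pixel and nearest-neighbor paths are specified above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathUpscale(nn.Module):
    """Upscale by `scale` along two parallel paths and fuse the results.

    Path A: sub-pixel convolution (learned, sharp but prone to artifacts).
    Path B: nearest-neighbor interpolation (parameter-free, stable).
    """
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Sub-pixel path: expand channels by scale^2, then rearrange to space.
        self.expand = nn.Conv2d(channels, channels * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        # 1x1 fusion of the concatenated paths back to `channels`.
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.shuffle(self.expand(x))                               # learned path
        b = F.interpolate(x, scale_factor=self.scale, mode="nearest")  # fixed path
        return self.fuse(torch.cat([a, b], dim=1))


if __name__ == "__main__":
    up = DualPathUpscale(32)
    print(up(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 32, 32])
```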
The feature extractor of our proposed model consists of three LMSAM modules that effectively extract multi-scale features. Additionally, an attention mechanism is introduced to assign varying weights to features of different scales in different channels. The label classifier consists of two convolutional...
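One plausible reading of this per-scale, per-channel weighting is a selective-kernel-style softmax across scales; the sketch below is an assumption about the mechanism, not the LMSAM implementation.

```python
import torch
import torch.nn as nn

class ScaleAttention(nn.Module):
    """Weight multi-scale features per channel with a softmax over scales.

    Given S feature maps of identical shape, a squeezed descriptor of
    their sum produces S channel-wise weight vectors that sum to 1 per
    channel, so each channel can favor a different scale.
    """
    def __init__(self, channels: int, num_scales: int, reduction: int = 4):
        super().__init__()
        self.num_scales = num_scales
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
        )
        # One weight vector per scale, computed from the shared descriptor.
        self.expand = nn.Conv2d(hidden, channels * num_scales, 1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.stack(feats, dim=1)            # (B, S, C, H, W)
        summed = stacked.sum(dim=1)                    # fuse branches
        b, c = summed.shape[:2]
        logits = self.expand(self.squeeze(summed))     # (B, S*C, 1, 1)
        weights = logits.view(b, self.num_scales, c, 1, 1).softmax(dim=1)
        return (stacked * weights).sum(dim=1)          # (B, C, H, W)


if __name__ == "__main__":
    sa = ScaleAttention(channels=32, num_scales=3)
    feats = [torch.randn(2, 32, 16, 16) for _ in range(3)]
    print(sa(feats).shape)  # torch.Size([2, 32, 16, 16])
```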
Finally, we design a lightweight multi-scale feature extraction network, the PAN-CSP-Network. The newly designed network is named mini and lightweight YOLOv3 (ML-YOLOv3). On the helmet dataset, the FLOPs and parameter size of ML-YOLOv3 are only 29.7% and 29.4% of those of YOLOv3...
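For context, a parameter-size ratio such as the 29.4% figure can be measured with a few lines of PyTorch; the two models below are placeholders, not YOLOv3 itself.

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Placeholder models standing in for YOLOv3 and ML-YOLOv3.
baseline = nn.Sequential(nn.Conv2d(3, 64, 3), nn.Conv2d(64, 128, 3))
compact = nn.Sequential(nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3))

ratio = count_parameters(compact) / count_parameters(baseline)
print(f"parameter ratio: {ratio:.1%}")
```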
Decomposed large kernels (Peng et al., 2017) are also used to model context, for example, SegNeXt (Guo et al., 2022) utilizes decomposed large kernel convolution and multi-scale feature aggregation to achieve convolutional attention that effectively encodes contextual information. Although these ...
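A hedged sketch of the decomposition idea (kernel size and channel count are illustrative, not SegNeXt's exact configuration): a k x k depth-wise convolution is approximated by a 1 x k strip convolution followed by k x 1, cutting the per-channel parameter cost from k^2 to 2k.

```python
import torch
import torch.nn as nn

class DecomposedLargeKernel(nn.Module):
    """Approximate a k x k depth-wise conv with 1 x k followed by k x 1.

    Parameter cost per channel drops from k*k to 2*k while keeping the
    same k x k receptive field, which is what makes large kernels cheap
    enough to use for contextual attention.
    """
    def __init__(self, channels: int, k: int = 21):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k),
                                    padding=(0, k // 2), groups=channels)
        self.vertical = nn.Conv2d(channels, channels, (k, 1),
                                  padding=(k // 2, 0), groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.vertical(self.horizontal(x))


if __name__ == "__main__":
    m = DecomposedLargeKernel(32, k=21)
    print(m(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```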
In the decoder, a convolutional attention feature fusion module is introduced. Relative attention weights that capture interactions between the channel, height, and width dimensions are used to aggregate feature maps. Specifically, without a pretrained model, postprocessing, or extra data, the lightweight network with ...
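A minimal sketch of fusion weights that interact across the channel, height, and width axes, assuming a pooling-and-gating design (the paper's exact module may differ):

```python
import torch
import torch.nn as nn

class TriDimFusion(nn.Module):
    """Fuse two feature maps with gates built along the C, H, and W axes.

    Each axis gets a descriptor by averaging over the other two axes; the
    product of the three sigmoid gates relatively weights the two inputs.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Conv2d(channels, channels, 1)
        self.height_gate = nn.Conv2d(1, 1, kernel_size=(7, 1), padding=(3, 0))
        self.width_gate = nn.Conv2d(1, 1, kernel_size=(1, 7), padding=(0, 3))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        s = a + b
        # Axis-wise descriptors: (B,C,1,1), (B,1,H,1), (B,1,1,W).
        wc = torch.sigmoid(self.channel_gate(s.mean(dim=(2, 3), keepdim=True)))
        wh = torch.sigmoid(self.height_gate(s.mean(dim=(1, 3), keepdim=True)))
        ww = torch.sigmoid(self.width_gate(s.mean(dim=(1, 2), keepdim=True)))
        w = wc * wh * ww                      # broadcasts to (B, C, H, W)
        return w * a + (1.0 - w) * b          # relative weighting of inputs


if __name__ == "__main__":
    f = TriDimFusion(32)
    a, b = torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16)
    print(f(a, b).shape)  # torch.Size([2, 32, 16, 16])
```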
The network’s meticulous design achieves a balance between feature extraction precision and computational efficiency, representing an advance in the domain.

Figure 3: Overall structure of the proposed model.

Attention mechanism
In recent years, the attention mechanism has become one of ...
A feature pyramid architecture is applied for multi-scale feature extraction from the micro-Doppler map. Feature enhancement is subsequently executed by the stacked Radar-ViT, in which fold and unfold operations are added to lower the computational load of the attention mechanism. The ...
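A generic sketch of how unfold/fold can cheapen attention (a common pattern, not necessarily Radar-ViT's exact design): non-overlapping p x p patches are flattened into single tokens before attention, so the token count shrinks by p^2 and the L x L attention map by p^4; fold restores the spatial layout afterwards.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FoldedAttention(nn.Module):
    """Self-attention over p x p patch tokens instead of per-pixel tokens.

    Unfold turns (B, C, H, W) into (H/p)*(W/p) tokens of size C*p*p, so
    the quadratic attention map is p**4 times smaller than with per-pixel
    tokens; fold inverts the patching exactly since stride == kernel size.
    """
    def __init__(self, channels: int, patch: int = 2, heads: int = 4):
        super().__init__()
        self.patch = patch
        dim = channels * patch * patch
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        p = self.patch
        tokens = F.unfold(x, kernel_size=p, stride=p)   # (B, C*p*p, L)
        tokens = tokens.transpose(1, 2)                 # (B, L, C*p*p)
        out, _ = self.attn(tokens, tokens, tokens)      # attention over L tokens
        out = out.transpose(1, 2)                       # back to (B, C*p*p, L)
        return F.fold(out, output_size=(h, w), kernel_size=p, stride=p)


if __name__ == "__main__":
    m = FoldedAttention(channels=16, patch=2, heads=4)
    print(m(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```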