Context-aware Attentional Pooling (CAP) for Fine-grained Visual Classification
To address this, we propose a novel context-aware attentional pooling (CAP) that effectively captures subtle changes via sub-pixel gradients, and learns to attend to informative integral regions and their importance in discriminating different subcategories without requiring the bounding-box and/or ...
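The snippet above describes attending over candidate regions and pooling them by learned importance. A minimal sketch of the generic idea (not the CAP paper's exact architecture): score each region descriptor, softmax the scores into importance weights, and take the weighted sum. The scoring vector `w` stands in for a learned attention head and is an assumption for illustration.

```python
import numpy as np

def attentional_pooling(region_feats, w):
    """Pool region features by learned importance.

    region_feats: (R, D) array of R region descriptors.
    w: (D,) scoring vector (stand-in for a learned attention head).
    Returns the (D,) attention-weighted summary and the (R,) weights.
    """
    scores = region_feats @ w                        # one relevance score per region
    scores -= scores.max()                           # stabilize the softmax
    weights = np.exp(scores) / np.exp(scores).sum()  # importance of each region
    pooled = weights @ region_feats                  # convex combination of regions
    return pooled, weights
```

Because the weights form a convex combination, the pooled descriptor stays in the span of the region features, with discriminative regions contributing most.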
Wang X, Jiang X, Ding H et al (2021) Knowledge-aware deep framework for collaborative skin lesion segmentation and melanoma recognition. Pattern Recogn 120:108075. Woo S, Park J, Lee JY, et al (2018) CBAM: Convolutional block attention module. In: Proceedings of the Euro...
Our model utilized the self-supervised attention foreground-aware pooling (SAP) and context-aware loss (CAL) techniques to generate high-quality pseudo-labels and improve segmentation accuracy. One of the major challenges in forest fire monitoring using UAVs is the need for manual annotation of ...
3.1. Context-Aware Feature Transformer Network (CaFTNet) The purpose of this paper is to enhance the representational ability of the feature maps and decrease the network model size in pose estimation. The overall architecture of CaFTNet is shown in Figure 3 to illustrate the whole proces...
sensors Article Attention-Based Context Aware Network for Semantic Comprehension of Aerial Scenery Weipeng Shi, Wenhu Qin *, Zhonghua Yun, Peng Ping, Kaiyang Wu and Yuke Qu School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China; bowenroom@seu.edu.cn (W.S.); ...
However, features extracted by this method lack global context awareness; the self-attention mechanism can estimate the relationship between each pixel position and the global context, but it fails to consider the differences among different classes of pixels. Therefore, correlation mining is ...
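The self-attention behavior described here can be sketched as standard scaled dot-product attention over flattened pixel features: every position is related to every other position, which supplies global context but is agnostic to class membership. For brevity this sketch uses identity query/key/value projections, which is an assumption; real models learn separate projections.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over pixel features.

    X: (N, D) array of N flattened pixel positions with D channels.
    Returns the context-aggregated features and the (N, N) attention map.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)               # relation of every position to all others
    scores -= scores.max(axis=1, keepdims=True) # stabilize the row-wise softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)           # each row: attention over all positions
    return A @ X, A                             # globally aggregated features
```

Each output row mixes all positions weighted purely by feature similarity, which is exactly why class-level differences between pixels are not modeled without an extra mechanism.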
In recent years, Context-Aware Recommender Systems (CARS) have become one of the most active research areas in recommender systems in EBSNs. It is important to incorporate contextual information into the recommendation process. For example, when incorporating the temporal context, an outdoor meet...
[10] used a relation-aware attention mechanism to construct specific sentence representations for each relation, and then performed sequence labeling to extract their corresponding head and tail entities. Zheng et al. [5] proposed a new labeling scheme and an end-to-end model with a biased ...
Adaptive attention-aware gated recurrent unit for sequential recommendation. Springer Cham 2019, 11447, 317–332. Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for...