In a semi-supervised scenario, the Multi-Domain Adversarial Feature Representation (mDAFR) strategy promotes the emergence of features that are discriminative for the main learning task while remaining largely invariant to the data sources (the course from which the data was captured) under consideration...
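A minimal PyTorch sketch of this kind of domain-adversarial objective (the class names, the single-layer heads, and the `num_courses` parameter are illustrative assumptions, not the mDAFR implementation): a shared encoder feeds a task head normally, while a domain (course) head sees the features through a gradient-reversal layer, so the encoder is pushed toward course-invariant yet task-discriminative representations.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, in_dim, feat_dim, num_classes, num_courses):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, num_classes)    # main learning task
        self.domain_head = nn.Linear(feat_dim, num_courses)  # data source (course)

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        # Reversed gradients make the encoder bad at revealing the course,
        # i.e. push it toward course-invariant features.
        return self.task_head(z), self.domain_head(GradReverse.apply(z, lambd))

# In the semi-supervised setting, labeled examples contribute to both losses,
# while unlabeled examples still contribute to the domain loss.
```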
Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning. Abstract: The paper first comprehensively studies, under image-level and semantic-level recovery loss functions, two pixel-denoising methods for enhancing adversarial robustness (the existing addition-based method and the previously unexplored filtering-based method), showing that compared with the existing addition-based pixel denoising...
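To make the contrast between the two denoising families concrete, here is a hedged sketch (not the paper's architecture; `predict_noise` and `predict_kernels` stand in for learned networks): the addition-based denoiser estimates a perturbation and subtracts it, while the filtering-based denoiser predicts per-pixel kernels and applies them locally.

```python
import torch.nn.functional as F

def additive_denoise(x_adv, predict_noise):
    """Addition-based: estimate the perturbation and subtract it from the input."""
    noise_hat = predict_noise(x_adv)              # assumed network, same shape as x_adv
    return (x_adv - noise_hat).clamp(0.0, 1.0)

def filtering_denoise(x_adv, predict_kernels, k=3):
    """Filtering-based: predict a per-pixel k*k kernel and apply it locally."""
    b, c, h, w = x_adv.shape
    kernels = F.softmax(predict_kernels(x_adv), dim=1)          # (B, k*k, H, W)
    patches = F.unfold(x_adv, kernel_size=k, padding=k // 2)    # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h * w)
    kernels = kernels.view(b, 1, k * k, h * w)
    out = (patches * kernels).sum(dim=2)                        # weighted local average
    return out.reshape(b, c, h, w)
```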
In this work, we study multi-domain learning for face anti-spoofing (MD-FAS), where a pre-trained FAS model needs to be updated to perform equally well on both source and target domains while only using target domain data for updating. We present a new m...
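The MD-FAS constraint of updating with target data only, while keeping source-domain performance, is commonly handled with some anti-forgetting regularizer; the sketch below shows one generic way to express that, distilling from a frozen copy of the pre-trained model, and is not the new method this excerpt goes on to introduce.

```python
import copy
import torch
import torch.nn as nn

def update_on_target(model, target_loader, epochs=1, lr=1e-4, distill_weight=1.0):
    """Fine-tune the FAS model on target-domain data only, distilling from a frozen
    copy of the pre-trained model so source-domain behaviour is not forgotten."""
    source_model = copy.deepcopy(model).eval()        # frozen pre-trained source model
    for p in source_model.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in target_loader:                    # y: float live/spoof labels
            logits = model(x)
            with torch.no_grad():
                src_logits = source_model(x)
            loss = bce(logits, y) + distill_weight * nn.functional.mse_loss(logits, src_logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```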
Multi-scale Domain-adversarial Multiple Instance Learning CNN (CVPR 2020) - takeuchi-lab/MS-DA-MIL-CNN
Feature-level adaptation methods aim to align the two domains in a latent feature space, either by minimizing a distribution distance such as the maximum mean discrepancy [13] or by leveraging adversarial learning strategies [6]. However, the aligned feature space is not guaranteed to be semantically ...
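As a concrete instance of the distribution-distance idea, a small sketch of a Gaussian-kernel MMD penalty between source and target feature batches (a generic estimator with a single assumed bandwidth, not necessarily the exact formulation of [13]):

```python
import torch

def gaussian_mmd(feat_src, feat_tgt, sigma=1.0):
    """Squared MMD between source and target feature batches with an RBF kernel."""
    def rbf(a, b):
        d2 = torch.cdist(a, b, p=2.0).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return rbf(feat_src, feat_src).mean() + rbf(feat_tgt, feat_tgt).mean() \
        - 2.0 * rbf(feat_src, feat_tgt).mean()

# Adding this term to the task loss pulls the two feature distributions together;
# the adversarial alternative in [6] replaces it with a trained domain discriminator.
```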
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation (CVPR 2018). To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model....
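A hedged sketch of the core mechanism rather than the actual StarGAN architecture: a single generator is conditioned on a target-domain code broadcast as extra input channels, so one model covers every domain pair; the layer sizes below are illustrative.

```python
import torch
import torch.nn as nn

class MultiDomainGenerator(nn.Module):
    """One generator for all domains, conditioned on a one-hot target-domain code."""
    def __init__(self, num_domains, img_channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, target_domain):
        # target_domain: (B, num_domains) one-hot; tile it over the spatial grid.
        b, _, h, w = x.shape
        code = target_domain.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, code], dim=1))

# A single discriminator then scores realism and classifies the domain of the output,
# while a reconstruction term maps the translated image back to its original domain.
```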
We propose a novel disentangled autoencoder (Dis-AE) neural network architecture that can learn domain-invariant data representations for multi-label classification of medical measurements even when the data is influenced by multiple interacting domain shifts at once. The model utilises adversarial training...
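A minimal sketch of that general shape, assuming a simple encoder/decoder and one adversarial head per domain factor rather than the actual Dis-AE architecture: the encoder is trained for reconstruction and the multi-label task while being penalized whenever the domain heads can read a domain factor off the latent code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisAESketch(nn.Module):
    """Autoencoder with a multi-label head and one adversarial head per domain factor."""
    def __init__(self, in_dim, latent_dim, num_labels, domain_sizes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.label_head = nn.Linear(latent_dim, num_labels)
        self.domain_heads = nn.ModuleList(nn.Linear(latent_dim, n) for n in domain_sizes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.label_head(z)

def encoder_step_loss(model, x, y, domain_labels, adv_weight=0.1):
    """Reconstruction + multi-label loss, minus the domain losses (fool the heads)."""
    z, x_hat, y_hat = model(x)
    loss = F.mse_loss(x_hat, x) + F.binary_cross_entropy_with_logits(y_hat, y)
    for head, d in zip(model.domain_heads, domain_labels):   # one label set per factor
        loss = loss - adv_weight * F.cross_entropy(head(z), d)
    return loss

# In a separate step the domain heads minimize cross_entropy(head(z.detach()), d),
# giving the adversarial min-max that removes domain information from z.
```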
UDA (Unsupervised Domain Adaptation) methods based on adversarial training are already widely used, but the authors observe that these methods do not account for the multi-modal nature of video within each domain; that is, if another modality is used for co-learning, could this environmental bias be reduced, or perhaps what is learned under one modality...
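To make the multi-modal point concrete, a hedged sketch (not any specific paper's method; the feature dimensions and critic shapes are assumptions): each video modality, e.g. RGB and optical flow, gets its own domain discriminator, so alignment pressure is applied per modality instead of once on a fused feature.

```python
import torch.nn as nn
import torch.nn.functional as F

class PerModalityDomainCritics(nn.Module):
    """One domain discriminator per modality (e.g. RGB features and flow features)."""
    def __init__(self, feat_dims):
        super().__init__()
        self.critics = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
            for d in feat_dims
        )

    def domain_loss(self, feats, is_source):
        # feats: list of per-modality feature batches; is_source: (B,) float 0/1 labels
        loss = 0.0
        for critic, f in zip(self.critics, feats):
            logits = critic(f).squeeze(-1)
            loss = loss + F.binary_cross_entropy_with_logits(logits, is_source)
        return loss / len(self.critics)

# The per-modality feature extractors are trained to fool every critic (e.g. via
# gradient reversal), so alignment pressure is applied to each modality separately.
```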