1. The paper's abstract points out that neural network models perform well on multi-task learning by learning shared layers that capture features common across tasks alongside task-specific ones. However, in most existing approaches the extracted shared features are contaminated by task-specific features or by noise from other tasks. To address this, the paper proposes an adversarial multi-task learning framework that keeps the shared feature space and the task-specific (private) feature spaces from interfering with each other.
Final loss: $L = L_{task} + \lambda L_{adv} + \gamma L_{diff}$. $L_{task}$ is the task-specific classification loss; minimizing it forces each private space to keep the task-specific features needed to classify its own task correctly. $L_{adv}$ is the adversarial loss on the shared space; it is optimized to mislead the task discriminator, so that the shared space contains only shared features. $L_{diff}$ is the redundancy loss; optimizing it removes the redundant features common to the shared and private spaces. Minimizing $L$ steadily improves classification; a sketch of how the three terms combine follows. Experiments: Datasets: the first 14...
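As an illustration of how the three terms combine, here is a minimal sketch assuming the orthogonality constraint $L_{diff} = \lVert S^\top H \rVert_F^2$ over batch matrices of shared ($S$) and private ($H$) representations; the weights `lam` and `gamma` below are placeholders, not the paper's tuned values, and the row normalization is my own assumption:

```python
import torch.nn.functional as F

def diff_loss(shared, private):
    """L_diff = ||S^T H||_F^2: penalizes overlap between the shared (S)
    and private (H) representations of a batch."""
    s = F.normalize(shared, dim=1)   # row-wise l2 norm (my assumption, a common stabilizer)
    h = F.normalize(private, dim=1)
    return (s.t() @ h).pow(2).sum()

def total_loss(task_loss, adv_loss, shared, private, lam=0.05, gamma=0.01):
    """L = L_task + lambda * L_adv + gamma * L_diff (weights are placeholders)."""
    return task_loss + lam * adv_loss + gamma * diff_loss(shared, private)
```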
Adversarial training is used to ensure that the shared space contains only task-invariant information, and an orthogonality constraint eliminates the features redundant between the shared and private spaces. Adversarial Multi-task Learning: the yellow LSTM extracts the shared features; the gray LSTMs extract each task's private features. Shared features for task k: $s_t^k = \mathrm{LSTM}(x_t, s_{t-1}^k, \theta_s)$; private features for task k: $h_t^k = \mathrm{LSTM}(x_t, h_{t-1}^k, \theta_k)$. Question 1: how do we guarantee that what the shared encoder extracts are common features...
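A minimal PyTorch sketch of the shared-private encoders described above; the class and parameter names (`SharedPrivateEncoder`, `embed_dim`, `hidden_dim`) are my own, and details such as layer sizes are assumptions rather than the paper's exact setup:

```python
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    """Shared-private scheme: one LSTM shared by all tasks (theta_s)
    plus one private LSTM per task (theta_k)."""

    def __init__(self, num_tasks, embed_dim, hidden_dim):
        super().__init__()
        # theta_s: parameters shared across every task
        self.shared_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # theta_k: one private encoder per task k
        self.private_lstms = nn.ModuleList(
            nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            for _ in range(num_tasks)
        )

    def forward(self, x, task_id):
        # s_t^k = LSTM(x_t, s_{t-1}^k, theta_s)  -- shared features
        s, _ = self.shared_lstm(x)
        # h_t^k = LSTM(x_t, h_{t-1}^k, theta_k)  -- private features
        h, _ = self.private_lstms[task_id](x)
        # last-step hidden states serve as the sentence representations
        return s[:, -1], h[:, -1]
```

The shared representation `s` feeds the task discriminator, while the task classifier uses both `s` and `h`.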
This paper proposes an adversarial shared-private model for multi-task learning: a shared RNN layer works adversarially against a learnable multi-layer perceptron, preventing it from making accurate predictions about the task type. Adversarial training makes the shared space purer and ensures that the shared representation is not polluted by task-specific features. Task Discriminator: maps a sentence's shared representation to a probability distribution over the tasks, estimating which task the sentence comes from.
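The shared encoder and the task discriminator are trained as a min-max game. One standard way to implement that game (borrowed from domain-adversarial training; this trick is my assumption, not necessarily the authors' exact code) is a gradient-reversal layer in front of the discriminator MLP:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so minimizing the discriminator loss downstream maximizes it with
    respect to the shared encoder (standard DANN-style trick)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class TaskDiscriminator(nn.Module):
    """MLP that tries to predict which task a shared representation
    came from; the shared encoder learns to fool it."""

    def __init__(self, hidden_dim, num_tasks, lam=0.05):
        super().__init__()
        self.lam = lam
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_tasks),
        )

    def forward(self, shared_repr):
        reversed_repr = GradReverse.apply(shared_repr, self.lam)
        return self.mlp(reversed_repr)  # logits over task ids
```

Minimizing the discriminator's cross-entropy through this layer simultaneously maximizes it with respect to the shared encoder, which is exactly the min-max in $L_{adv}$.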
Paper: Adversarial Multi-task Learning for Text Classification. I recently decided to read one GAN-related paper every week, partly to improve my reading comprehension and partly to broaden my thinking. As a GAN beginner I will inevitably phrase some things poorly, so corrections are welcome! Title: Adversarial Multi-task Learning for Text Classification. Multi-task learning (MTL) means learning knowledge common to a family of tasks (focus on learning...
http://nlp.fudan.edu/data/ 1 Introduction: Multi-task learning is an effective approach to improve the performance of a single task with the help of other related tasks. Recently, neural-based models for multi-task learning have become very popular, ranging from computer vision (Misra et al., 2016; Zhang et al., 2014) to natural language processing...
Original abstract: "Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features..."