WARP: Word-level Adversarial ReProgramming
This repository contains code for the ACL 2021 paper WARP: Word-level Adversarial ReProgramming. WARP adds a few trainable embeddings around the input, which lead the masked language model to predict the sentiment of the sentence in the SST-2 task. Transfer learning from pretrained language models re...
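The core idea, a handful of trainable embeddings placed around the input of an otherwise frozen model, can be sketched in a few lines of PyTorch. This is a minimal toy, not the repository's actual implementation: a plain `nn.Embedding` stands in for the pretrained masked language model, and all class names, sizes, and parameter counts are illustrative.

```python
import torch
import torch.nn as nn

class WarpStylePrompt(nn.Module):
    """Toy sketch of WARP-style prompt tuning: only the prompt
    embeddings placed before and after the input are trained; the
    pretrained embedding layer stays frozen."""

    def __init__(self, frozen_embed, n_prompt=4, dim=16):
        super().__init__()
        self.frozen_embed = frozen_embed          # stands in for the pretrained MLM
        for p in self.frozen_embed.parameters():
            p.requires_grad = False               # keep the backbone frozen
        # the only trainable parameters: prompts surrounding the input
        self.before = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
        self.after = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

    def forward(self, token_ids):
        x = self.frozen_embed(token_ids)          # (batch, seq, dim)
        b = self.before.expand(x.size(0), -1, -1)
        a = self.after.expand(x.size(0), -1, -1)
        # prompt embeddings wrap around the real input embeddings
        return torch.cat([b, x, a], dim=1)

vocab, dim = 100, 16
frozen = nn.Embedding(vocab, dim)
model = WarpStylePrompt(frozen, n_prompt=4, dim=dim)
out = model(torch.randint(0, vocab, (2, 10)))
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, trainable)  # sequence grows by 2 * n_prompt; only prompts train
```

In the full method, the concatenated embeddings would be fed through the frozen transformer and the `[MASK]` position's output compared against verbalizer tokens (e.g. sentiment words for SST-2); here only the parameter-efficiency aspect is shown.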