I am using the hfl/chinese-roberta-wwm-ext-large model, and while fine-tuning on a downstream task I found that the mlm_loss is over 300 and keeps rising. I tested a few fill-mask sentences with the model and found that only hfl/chinese-roberta-wwm-ext-large has this problem; the results are shown below. For the test I used TFBertForMaskedLM from transformers; the specific code ...
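A minimal sketch of such a fill-mask test with TFBertForMaskedLM (the example sentence and the top-5 decoding are illustrative assumptions, not the original snippet) might look like this:

```python
# Minimal sketch: load the checkpoint with the BERT classes, as the model card
# instructs, and predict the token at a single [MASK] position.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

model_name = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertForMaskedLM.from_pretrained(model_name)

# Example sentence is an assumption for illustration only.
text = "今天天气很[MASK]。"
inputs = tokenizer(text, return_tensors="tf")

logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Find the [MASK] position and print the top-5 predicted tokens for it.
mask_pos = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0][0])
top5 = tf.math.top_k(logits[0, mask_pos], k=5)
print(tokenizer.convert_ids_to_tokens(top5.indices.numpy()))
```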
From the model's documentation:

Please use 'Bert' related functions to load this model!

Chinese BERT with Whole Word Masking: For further accelerating Chinese natural language processing, we provide Chinese pre-trained BERT with Whole Word Masking. ...
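To connect this back to the mlm_loss of 300+, one rough sanity check (an assumption on my part, not something from the post or the model card) is to score an unmasked sentence against itself; with a properly loaded MLM head the mean per-token loss should be close to zero, so a very large value would point to the prediction-head weights not being loaded (the from_pretrained warning about newly initialized weights is worth checking):

```python
# Rough sanity check (illustrative, not from the original post): feed an
# unmasked sentence as its own labels. A healthy pretrained MLM head should
# give a mean per-token loss near zero; a huge value would suggest the head
# was randomly initialized instead of loaded from the checkpoint.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

model_name = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertForMaskedLM.from_pretrained(model_name)

inputs = tokenizer("使用语言模型来预测下一个词。", return_tensors="tf")  # illustrative sentence
outputs = model(**inputs, labels=inputs["input_ids"])
print(float(tf.reduce_mean(outputs.loss)))
```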