train_result = trainer.train(model_path=model_path)
  File "/mnt/work/wanfan01/repo/SimCSE-main/simcse/trainers.py", line 464, in train
    tr_loss += self.training_step(model, inputs)
  File "/mnt/work/anaconda3/envs/wanfan01_py3/lib/python3.8/site-packages/transformers/trainer.py", line ...
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py:1230...
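The cell excerpt above ends in a truncated traceback. For context, here is a minimal sketch of the kind of causal-LM Trainer setup those lines usually come from; the checkpoint name, dataset file, and TrainingArguments values are placeholders, not the reporter's actual notebook.

```python
# Minimal sketch (not the reporter's exact notebook) of a causal-LM Trainer setup
# in which the data_collator / use_cache lines above typically appear.
import transformers
from datasets import load_dataset

model_name = "gpt2"  # placeholder checkpoint
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)

# "train.txt" is a placeholder plain-text training file
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = transformers.Trainer(
    model=model,
    args=transformers.TrainingArguments(output_dir="out", per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False -> plain causal-LM labels, no masked-token objective
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
```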
SAVE_STATE_WARNING = ""  # from torch.optim.lr_scheduler import SAVE_STATE_WARNING  ## this is the line that was newly commented out.

5.2 ValueError: You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one. -- switch the transformers version, e.g. to 3.5.1

5.3 ACT2FN error -- comment it out, do not...
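As an alternative to the version switch suggested for 5.2, a commonly used workaround is to give GPT2Tokenizer a padding token so the data collator can pad batches. A minimal sketch (this is a swapped-in alternative, not the fix proposed above; "gpt2" is just an illustrative checkpoint):

```python
# Alternative workaround for the padding ValueError: reuse EOS as the pad token
# instead of downgrading transformers.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
print(tokenizer.pad_token, tokenizer.pad_token_id)  # '<|endoftext|>' 50256
```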
Just disable the ShardedDDPOption import, which is not used.

Still facing this issue.

Details - I am getting the following error, but this error should not be there:
cannot import name 'ShardedDDPOption' from 'transformers.trainer'
I have the following versions installed - tokenizers-0.19.1 tra...
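Since the comment says the import is unused, one way to "disable" it is to guard it, so the same code still loads on newer transformers versions where the symbol no longer exists. A sketch, assuming ShardedDDPOption really is never referenced on those versions:

```python
# Guarded import: works on older transformers (symbol present) and on newer
# releases where ShardedDDPOption was removed from transformers.trainer.
try:
    from transformers.trainer import ShardedDDPOption
except ImportError:
    ShardedDDPOption = None  # safe only if the code never actually uses it
```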
- OS: Ubuntu 18.04
- Python: 3.9
- Transformers: 4.30.2
- PyTorch: 2.0.2+cu118
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True

Anything else?

No response
- One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
- My own task or dataset (give details below)

Reproduction

from accelerate import Accelerator ...
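The reproduction code is cut off after the import, so the reporter's actual script is not shown here. For orientation only, a generic minimal Accelerate loop (every name below is a placeholder, not taken from the issue):

```python
# Generic illustration of the accelerate.Accelerator API -- not the reporter's script.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8)

# prepare() moves model/optimizer/dataloader to the right device and wraps them
# for whatever distributed setup Accelerate detects.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, labels in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```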