We add the padding token as a special token to the tokenizer, which in this case requires resizing the token embeddings, as shown below:

```python
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(model.config.vocab_size + 1)
```