HF_HUB_CACHE — configures where repositories from the Hub are cached locally (models, datasets and Spaces). Defaults to "$HF_HOME/hub" (i.e. "~/.cache/huggingface/hub" by default). If you want to change the location of the cache, set the HF_HUB_CACHE or HF_HOME environment variable.
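The lookup order described above can be sketched as a small pure function; `resolve_hub_cache` is a hypothetical helper for illustration, not part of the huggingface_hub API:

```python
import os

def resolve_hub_cache(env):
    """Resolve the Hub cache path: HF_HUB_CACHE wins, else HF_HOME/hub,
    else the default ~/.cache/huggingface/hub."""
    if "HF_HUB_CACHE" in env:
        return env["HF_HUB_CACHE"]
    hf_home = env.get(
        "HF_HOME",
        os.path.expanduser(os.path.join("~", ".cache", "huggingface")),
    )
    return os.path.join(hf_home, "hub")

print(resolve_hub_cache({"HF_HUB_CACHE": "/mnt/big_disk/hf"}))  # /mnt/big_disk/hf
print(resolve_hub_cache({"HF_HOME": "/opt/hf"}))                # /opt/hf/hub
```

Setting HF_HUB_CACHE overrides only the hub cache, while HF_HOME moves everything huggingface-related (cache, token, etc.) under the new root.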
For models with Parameter-Efficient Fine-Tuning (PEFT) adapters, first load the base model and resize it the same way you did while training (as mentioned in the Hugging Face PEFT Troubleshooting Guide, or see this notebook). For example: from transformers import ...
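The steps above can be sketched as follows. This is a hedged outline under assumed names — the base model id, adapter id, and tokenizer length are placeholders you would replace with the values from your own training run:

```python
def load_base_with_adapter(base_id, adapter_id, tokenizer_len):
    """Load the base model, resize its token embeddings to the vocabulary
    size used during training, then attach the PEFT adapter on top."""
    from transformers import AutoModelForCausalLM  # lazy import: heavy deps
    from peft import PeftModel

    model = AutoModelForCausalLM.from_pretrained(base_id)
    # The resize must match what was done at training time, or the
    # adapter's embedding shapes will not line up with the base model's.
    model.resize_token_embeddings(tokenizer_len)
    return PeftModel.from_pretrained(model, adapter_id)
```

Resizing before `PeftModel.from_pretrained` matters: the adapter checkpoint stores weights shaped for the resized vocabulary, so loading it onto an unresized base model raises a shape-mismatch error.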
Is this the complete set of llama-7b weights? It should be https://huggingface.co/huggyllama/llama-7b. Why does this repo contain two groups of large files — one with the .safetensors suffix and one with the .bin suffix — and why are their sizes identical?
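The .bin and .safetensors files hold the same tensors in two serialization formats, which is why their sizes match. One way to avoid downloading both copies is to filter by filename pattern; the repo id comes from the question above, while the pattern list and destination path are assumptions for illustration:

```python
def download_safetensors_only(repo_id="huggyllama/llama-7b", dest="./llama-7b"):
    """Fetch only the safetensors weight shards plus the JSON config and
    tokenizer files, skipping the duplicate .bin shards."""
    from huggingface_hub import snapshot_download  # lazy import
    return snapshot_download(
        repo_id,
        local_dir=dest,
        allow_patterns=["*.safetensors*", "*.json"],
    )
```

The `*.safetensors*` pattern also matches the `model.safetensors.index.json`-style index file that sharded checkpoints need.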
peft/src/peft/peft_model.py, line 1136 in 1c1c7fd: self.base_model.model.generation_config = self.generation_config — the self.generation_config attribute is not initialized in the model, and it is also not defined in any class further up the inheritance hierarchy. S...
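One common way to make such an assignment safe is to guard it with hasattr so a missing attribute is skipped instead of raising AttributeError. This is a hedged sketch of that guard pattern on a toy class, not the actual PeftModel code:

```python
class AdapterWrapperSketch:
    """Toy stand-in for a wrapper that forwards generation_config to its
    base model only when the attribute actually exists."""

    def __init__(self, base_model, generation_config=None):
        self.base_model = base_model
        if generation_config is not None:
            self.generation_config = generation_config
        # Guard: skip the forwarding when generation_config was never set,
        # instead of raising AttributeError on self.generation_config.
        if hasattr(self, "generation_config"):
            self.base_model.generation_config = self.generation_config

class _DummyModel:
    pass

w1 = AdapterWrapperSketch(_DummyModel())           # no config: no error raised
w2 = AdapterWrapperSketch(_DummyModel(), "cfg")    # config forwarded to base
print(getattr(w2.base_model, "generation_config", None))  # cfg
```

The same effect can be had with `getattr(self, "generation_config", None)` and a None check; hasattr reads more directly as "only forward what exists".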