System Info
transformers version: 4.36.0
Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
Python version: 3.10.12
Huggingface_hub version: 0.19.4
Safetensors version: 0.3.1
Accelerate version: 0.25.0
Accelerate config: not found
P...
I see different outputs when generating text with model.generate with and without the use_cache argument. Is this intended, and how can I work around it? The scores differ when I use the cache (from the second generated token onwards). AFAIK use_cache is an optimization that shouldn't affect...
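The intuition behind the question can be shown with a toy numpy sketch (this is an illustration of KV caching in general, not the transformers internals): with a cache, a decoding step reuses the keys/values computed for earlier tokens instead of recomputing them. In exact arithmetic both paths give identical attention outputs; in practice tiny float differences can appear because the two paths run through differently shaped kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # head dimension
T = 5   # tokens generated so far
x = rng.normal(size=(T, d))                      # token hidden states
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attend(q, K, V):
    # softmax(q K^T / sqrt(d)) V for a single query vector
    scores = q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

# Path 1: no cache -- recompute K and V for the whole prefix at this step.
K_full, V_full = x @ Wk, x @ Wv
out_no_cache = attend(x[-1] @ Wq, K_full, V_full)

# Path 2: cache -- K and V for tokens 0..T-2 were stored at earlier steps;
# only the newest token's key/value is computed now and appended.
K_cache, V_cache = x[:-1] @ Wk, x[:-1] @ Wv
K_step = np.vstack([K_cache, (x[-1] @ Wk)[None, :]])
V_step = np.vstack([V_cache, (x[-1] @ Wv)[None, :]])
out_cache = attend(x[-1] @ Wq, K_step, V_step)

print(np.allclose(out_no_cache, out_cache))  # True
```

So mathematically the cache changes nothing; visible score differences between the two `generate` paths come from floating-point and kernel-level effects, not from the caching idea itself.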
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(...)
!!! Exception during processing !!!
An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local ...
apiVersion: eci.aliyun.com/v1alpha1
kind: DataCache
metadata:
  name: alpaca-lora-7b
spec:
  path: /model/alpaca-lora-7b       # storage path for the model data
  bucket: test                      # the DataCache bucket to use
  dataSource:
    type: URL
    options:
      repoSource: "HuggingFace/Model"   # data source: a HuggingFace model repo
      repoId: "tloe...
Refs #959
Use GitHub action cache.
Checklist:
- Documentation has been updated, if necessary.
- Examples have been added, if necessary.
- Tests have been added, if necessary.
* use gh cache for huggingface models
  Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
* increase hf timeout
  Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
* more timeout
  Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
* use different cache key in each job
  Signed-off-by: ...
adapter.py-adapter:278 - WARNING: failed to save the data to cache, error: get_models..EmbeddingType.validate() takes 2 positional arguments but 3 were given
Can you please just tell me that the functionality is not implemented yet for HuggingFacePipeline using a local Llama 3.1 ...
Looking at the source code, use_cache seems to mean that each layer's encoded results are stored. But I don't understand where this use_cache variable is actually used. Can it be turned off? I tried changing use_cache in config.json to false, but in the code it is still true, and I don't know why. Isn't that generation_config generated from this file?
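One common reason an edit to `use_cache` in config.json appears to be ignored is that the value can be overridden later, e.g. by a separate generation configuration or by an explicit argument at call time. A minimal toy sketch of that precedence chain (the `resolve_use_cache` helper and the precedence order shown are illustrative, not the transformers internals):

```python
import json
import os
import tempfile

def resolve_use_cache(call_arg, generation_config, model_config):
    """Toy precedence: call-time argument > generation config > config file."""
    if call_arg is not None:
        return call_arg
    if "use_cache" in generation_config:
        return generation_config["use_cache"]
    return model_config.get("use_cache", True)

# Write a config.json with use_cache disabled, as in the question.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    with open(path, "w") as f:
        json.dump({"use_cache": False}, f)
    with open(path) as f:
        model_config = json.load(f)

# If a generation config (or a caller) still says True, the file value loses:
print(resolve_use_cache(None, {"use_cache": True}, model_config))  # True
print(resolve_use_cache(None, {}, model_config))                   # False
```

Under this assumption, the reliable way to disable the cache is to set it at the highest-precedence point you control (e.g. pass it explicitly when calling) rather than only editing the file.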
How can I modify the clip_interrogator.py script so that it will look for manually downloaded models in ComfyUI/models/blip/Salesforce/blip-image-captioning-large instead of trying to create a new huggingface cache? Ideally, I would prefer if it didn't connect to huggingface at all. I don...
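One way to approach this is to resolve the model path locally and refuse any Hub fallback. A sketch under stated assumptions: the directory layout and the `resolve_model_path` helper are illustrative (not clip_interrogator API), while `local_files_only=True` is the standard transformers switch to prevent any contact with huggingface.co.

```python
import os

# Illustrative: prefer a manually downloaded model directory and never
# fall back to the huggingface cache or a download.
BLIP_LOCAL_DIR = os.path.join(
    "ComfyUI", "models", "blip", "Salesforce", "blip-image-captioning-large"
)

def resolve_model_path(local_dir=BLIP_LOCAL_DIR):
    """Return the local model directory, refusing any Hub download."""
    if os.path.isdir(local_dir):
        return local_dir
    raise FileNotFoundError(
        f"Model not found at {local_dir}; download it manually first."
    )

# In clip_interrogator.py, loading would then look roughly like:
#   processor = BlipProcessor.from_pretrained(resolve_model_path(),
#                                             local_files_only=True)
# local_files_only=True tells transformers never to hit the network.
```

Failing loudly when the folder is missing (instead of silently re-downloading) matches the goal of never creating a new huggingface cache.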
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. - Use commit hash to look in cache instead of calling head (#18534) · huggingface/transformers@0d0aada
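The idea behind the commit above is that a full commit hash uniquely identifies a cached snapshot, so no network round-trip (such as a HEAD request resolving a branch to a commit) is needed. A rough sketch of that lookup (the cache layout and helper name here are assumptions for illustration, not the transformers implementation):

```python
import os
import re

# A full git commit hash is 40 hex characters; only then can we skip asking
# the Hub which commit a branch or tag currently points to.
COMMIT_HASH_RE = re.compile(r"^[0-9a-f]{40}$")

def cached_snapshot_dir(cache_dir, repo_id, revision):
    """Return the cached snapshot path for a commit hash, else None.

    Illustrative layout: <cache>/<org--name>/snapshots/<commit-hash>/
    """
    if not COMMIT_HASH_RE.match(revision):
        return None  # branch or tag: must ask the Hub (e.g. a HEAD call)
    path = os.path.join(
        cache_dir, repo_id.replace("/", "--"), "snapshots", revision
    )
    return path if os.path.isdir(path) else None
```

Mutable revisions like "main" still require a network check, because the cache cannot know whether the branch has moved since the snapshot was stored.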