I'm trying to follow this Colab notebook on my server (Ubuntu 22.04.5 LTS, Python 3.12.3, CUDA 12.4) to fine-tune Qwen2-VL on my custom dataset. I ran into this error: OSError: Can't load the model for 'un