/home/lks/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Detected CUDA version 118
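The warning above comes from a fallback: when no CUDA runtime is visible, bitsandbytes loads a CPU-only binary instead of a versioned CUDA one. A minimal sketch of that decision, using only the standard library (the function and file names are illustrative, not the actual bitsandbytes code):

```python
import ctypes.util

def pick_backend():
    # If the dynamic loader cannot find the CUDA runtime, fall back to
    # the CPU-only library, mirroring the warning in the log above.
    cudart = ctypes.util.find_library("cudart")  # searches loader paths
    if cudart is None:
        print("WARNING: No GPU detected! Proceeding to load CPU-only library...")
        return "libbitsandbytes_cpu.so"
    # In the real setup the suffix is derived from the detected runtime,
    # e.g. "118" for CUDA 11.8 as reported above.
    return "libbitsandbytes_cuda118.so"

print(pick_backend())
```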
The bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8-bit & 4-bit quantization functions. The library exposes these 8-bit and 4-bit quantization primitives through bitsandbytes.nn....
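To make the "quantization primitives" concrete: the basic building block is absmax int8 quantization, where floats are scaled so the largest magnitude maps to 127. A toy pure-Python sketch of the idea (illustrative only, not the library's implementation):

```python
def absmax_quantize(xs):
    """Scale floats into the int8 range [-127, 127] by the max magnitude."""
    scale = max(abs(x) for x in xs) / 127.0
    q = [round(x / scale) for x in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values and the scale."""
    return [v * scale for v in q]

vals = [0.1, -0.5, 2.0, -1.25]
q, s = absmax_quantize(vals)
approx = dequantize(q, s)
```

The round trip is lossy but close; the library's real kernels apply this per block or per row on the GPU.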
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=100 python setup.py install
CUDA SETUP: Required library version not found: libbitsandbytes_cuda100.so. Maybe you need to compi...
cd bitsandbytes
# CUDA_VERSION in {110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120}
# make argument in {cuda110, cuda11x, cuda12x}
# Check your own CUDA version: CUDA 11.8 corresponds to 118 and cuda11x.
CUDA_VERSION=118 make cuda11x
python setup.py install
# peft
git clone https://github....
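The version-to-target mapping in the comments above can be sketched as a small helper (a hypothetical function name; it only encodes the cuda110/cuda11x/cuda12x convention listed there):

```python
def make_target(cuda_version: str) -> str:
    """Map a CUDA_VERSION string (e.g. "118") to a bitsandbytes make target."""
    if cuda_version == "110":
        return "cuda110"          # CUDA 11.0 has its own target
    if cuda_version.startswith("11"):
        return "cuda11x"          # e.g. CUDA 11.8 -> 118 -> cuda11x
    if cuda_version.startswith("12"):
        return "cuda12x"
    raise ValueError(f"unsupported CUDA version: {cuda_version}")
```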
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
False
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local...
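The "searching in backup paths" step above amounts to checking LD_LIBRARY_PATH first and then a few well-known install prefixes. A stdlib-only sketch of that search (the backup path is an assumption; the real setup checks several locations):

```python
import os
from pathlib import Path

def find_cudart(extra_paths=("/usr/local/cuda/lib64",)):
    # Check directories on LD_LIBRARY_PATH first, then backup locations,
    # mirroring the warning sequence in the log above.
    candidates = os.environ.get("LD_LIBRARY_PATH", "").split(":")
    candidates += list(extra_paths)
    for d in candidates:
        if d and Path(d, "libcudart.so").exists():
            return str(Path(d, "libcudart.so"))
    return None  # nothing found -> the setup falls back or errors out
```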
PyTorch Version: 2.3.1+cu118
Devices Name: cuda:0 NVIDIA GeForce RTX 3080 Ti : cudaMallocAsync
Type: cuda
VRAM Total: 12884246528
VRAM Free: 11597250560
Torch VRAM Total: 0
Torch VRAM Free: 0
Reproduction
ComfyUI Error Report
Error Details
Node Type: Joy_caption
Exception Type: RuntimeEr...
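The VRAM figures in that report are raw byte counts. A one-liner makes them readable (the 12884246528 and 11597250560 values are taken directly from the report above):

```python
def gib(n_bytes: int) -> float:
    """Convert a byte count to GiB."""
    return n_bytes / 2**30

vram_total = 12_884_246_528   # "VRAM Total" from the report
vram_free = 11_597_250_560    # "VRAM Free" from the report
print(f"total={gib(vram_total):.1f} GiB, free={gib(vram_free):.1f} GiB")
```

So the RTX 3080 Ti reports about 12.0 GiB total with roughly 10.8 GiB free.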
8-bit CUDA functions for PyTorch.
=cuda-11.4
elif [[ "$CUDA_VERSION" -eq "115" ]]; then
    URL=$URL115
    FOLDER=cuda-11.5
elif [[ "$CUDA_VERSION" -eq "116" ]]; then
    URL=$URL116
    FOLDER=cuda-11.6
elif [[ "$CUDA_VERSION" -eq "117" ]]; then
    URL=$URL117
    FOLDER=cuda-11.7
elif [[ "$CUDA_VERSION" -eq "118"...
Problem
Hello, I'm getting this weird cublasLt error on a Lambda Labs H100 with CUDA 118, PyTorch 2.0.1, and Python 3.10 (Miniconda) while trying to fine-tune a 3B-parameter open-llama model using LoRA with 8-bit loading. This only happens if we turn on 8b...
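For context on why 8-bit loading pulls in cublasLt at all: LLM.int8() splits activations into outlier columns (kept in fp16) and regular columns (sent through the int8 cublasLt matmul). A toy sketch of that split, using the 6.0 outlier threshold from the LLM.int8() paper (illustrative only, not the library's kernel code):

```python
def split_outliers(rows, threshold=6.0):
    """Partition column indices into outlier (fp16) and regular (int8) sets.

    A column is an outlier if any activation magnitude in it reaches the
    threshold; only the regular columns would use the int8 cublasLt path.
    """
    ncols = len(rows[0])
    outlier_cols = [j for j in range(ncols)
                    if max(abs(r[j]) for r in rows) >= threshold]
    regular_cols = [j for j in range(ncols) if j not in outlier_cols]
    return outlier_cols, regular_cols
```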
/opt/gpt4all/gpt4all-main1/gpt4all-training/.env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda111_nocublaslt.so
/opt/gpt4all/gpt4all-main1/gpt4all-training/.env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so
/opt/gpt4all/gpt4all-main...
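The _nocublaslt variants listed above exist because int8 tensor-core matmuls require compute capability 7.5 or newer; older cards get a build that avoids cublasLt. A sketch of how the binary name could be chosen (an assumption based on the bitsandbytes setup logic, not a verbatim copy of it):

```python
def library_name(cuda_version: str, cc_major: int, cc_minor: int) -> str:
    """Pick a bitsandbytes binary name from CUDA version and compute capability.

    Cards below compute capability 7.5 lack int8 tensor-core support,
    so they load the _nocublaslt build seen in the listing above.
    """
    has_cublaslt = (cc_major, cc_minor) >= (7, 5)
    suffix = "" if has_cublaslt else "_nocublaslt"
    return f"libbitsandbytes_cuda{cuda_version}{suffix}.so"
```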