Collecting nvidia-nccl-cu11==2.14.3 (from torch>=2.0.0->vllm)
  Using cached nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl (177.1 MB)
Collecting nvidia-nvtx-cu11==11.7.91 (from torch>=2.0.0->vllm)
  Using cached nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl...
pip uninstall torch
pip install vllm

bashirsouid commented Jun 22, 2023

Oh, silly me, I missed seeing in the docs that CUDA 12 wasn't supported yet 🤦. Will try out the other docker image tonight. Thanks for the advice and great project!
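As a quick sanity check after the reinstall, something like the following can confirm that the torch build pip resolved is a CUDA 11.x one (matching the cu11 wheels in the log above). This is a minimal sketch; the exact version strings in the comments are illustrative, not guaranteed.

import torch

# The torch build pulled in by vllm should report a CUDA 11.x toolkit,
# not 12.x, for this version of vllm.
print(torch.__version__)         # e.g. "2.0.1+cu117" (illustrative)
print(torch.version.cuda)        # expect something like "11.7", not "12.x"
print(torch.cuda.is_available()) # True if the driver can load the cu11 runtime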