If you have a different version of ROCm installed already, you may want to uninstall it first:

sudo amdgpu-uninstall --rocmrelease=all

You should then be able to install ROCm with these commands:

sudo apt-get update
wget https://repo.radeon.com/amdgpu-install/5.4.3/ubuntu/jammy/amdgpu-install_...
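For reference, a typical end-to-end sequence with the amdgpu-install method looks roughly like the sketch below, assuming ROCm 5.4.3 on Ubuntu 22.04 "jammy". The exact installer .deb filename and the --usecase selection are assumptions, not taken from the snippet above, so check repo.radeon.com for the current package name:

sudo amdgpu-uninstall --rocmrelease=all        # remove any previously installed ROCm release
sudo apt-get update
wget https://repo.radeon.com/amdgpu-install/5.4.3/ubuntu/jammy/amdgpu-install_5.4.50403-1_all.deb   # assumed filename
sudo apt-get install ./amdgpu-install_5.4.50403-1_all.deb
sudo amdgpu-install --usecase=rocm             # install the ROCm stack (HIP, runtime, libraries)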
2023-05-25 11:53:19,301 | DEBUG | /home/user/automatic/installer.py | Torch overrides: cuda=False rocm=True ipex=False diml=False
2023-05-25 11:53:19,301 | DEBUG | /home/user/automatic/installer.py | Torch allowed: cuda=False rocm=True ipex=False diml=False
2023-05-25 11:53:...
(a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50...
(VllmWorkerProcess pid=11861) INFO 11-05 15:45:57 pynccl.py:63] vLLM is using nccl==2.21.5
INFO 11-05 15:45:57 custom_all_reduce_utils.py:242] reading GPU P2P access cache from /home/jupyter/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
(VllmWorkerProcess pid=11861) INFO 1...
This got me to a new error message:

ModuleNotFoundError: No module named 'mmcv._ext'

My full repro from head using virtual Python environments:

$ cat ~/mmagic.sh
#!/bin/bash
VENV=~/.mmagic
# To cleanup, run `deactivate` and `rm -r $VENV`.
# Create my virtual environment at $...
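Since ~/mmagic.sh is truncated above, here is a minimal sketch of what such a venv-based repro script might look like; the package names and install steps are assumptions, not the author's exact script:

#!/bin/bash
VENV=~/.mmagic
# To clean up, run `deactivate` and `rm -r $VENV`.
python3 -m venv "$VENV"                 # create the virtual environment
source "$VENV/bin/activate"
pip install --upgrade pip
pip install openmim
mim install mmcv                        # a prebuilt mmcv wheel should ship the compiled mmcv._ext ops
pip install mmagic
python -c "import mmcv.ops"             # raises ModuleNotFoundError: No module named 'mmcv._ext' if the ops are missing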
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
Clang version: Could not collect
CMake version: version 3.22.2
...
LAPACK is enabled (usually provided by MKL)
CPU capability usage: AVX2
CUDA Runtime 11.3
NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70...
Installed using source? [yes/no]: no
Are you planning to deploy it using the docker container? [yes/no]: yes
Is it a CPU or GPU environment?: GPU

Expected Behavior
Successfully built the docker image for CUDA 10.1

Current Behavior
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.6
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-...
runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] k2==1.24.3.dev20230707+cuda11.7.torch1.13.0
[pip3] numpy==1.22.4
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[conda] blas 1.0 mkl
[conda] k2 1.24.3.dev...