open-webui | /app/backend/open_webui
open-webui | /app/backend
open-webui | /app
open-webui | INFO:open_webui.env:GLOBAL_LOG_LEVEL: INFO
open-webui | ERROR:open_webui.env:Error when testing CUDA but USE_CUDA_DOCKER is true. Resetting USE_CUDA_DOCKER to false: CUDA not ...
Dockerfile.ubi: use CUDA 12.1 instead of 12.4

dtrifiro added 5 commits August 12, 2024 18:31
deps: bump vllm-tgis-adapter to 0.2.4 (a58d5f2)
Dockerfile.ubi: force using python-installed cuda runtime libraries (6b47904)
Dockerfile: use uv pip everywhere (it's faster) (2d71e49)
Dock...
Use cuda-dl-base as default base image for docker builds
- Ubuntu 24.04 as default
- Remove all dependencies; Nsight and UCX are already present in the base image
- Add option to pass in python-versions
- use ...
I looked in Portainer and saw that Enable Docker CUDA was not set to TRUE. I set it to true, but I am still using this image: ghcr.io/open-webui/open-webui:ollama. When I run a model I don't see my GPU usage jump, but I do see my CPU usage go up. I then switc...
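Before digging into the Open WebUI settings, it helps to confirm that Docker can pass the GPU through at all. A minimal shell sketch, assuming the NVIDIA Container Toolkit is installed on the host; the `:cuda` image tag (the CUDA-enabled Open WebUI variant) and the CUDA base-image tag are assumptions to verify against the current docs:

```shell
# Sketch: check GPU passthrough independently of Open WebUI.
# Step 1 (run manually): a plain CUDA base image should see the GPU:
#   docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi

# Step 2: the CUDA variant of the Open WebUI image must itself be started
# with --gpus all, otherwise USE_CUDA_DOCKER finds no device and resets:
CMD="docker run -d --gpus all -p 3000:8080 ghcr.io/open-webui/open-webui:cuda"
echo "$CMD"
```

The key point is that enabling CUDA inside the app is not enough; the container also needs the `--gpus all` flag (or an equivalent device request) at `docker run` time.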
&& python3 -m pip install -r tools/ci_build/github/linux/docker/inference/x86_64/python/cpu/scripts/requirements.txt \
&& /bin/bash ./build.sh --allow_running_as_root --skip_submodule_sync \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ \
    ...
Description
Serves as an example of how to build and run onnxruntime-gpu with the latest software stack.

To build the docker image:
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/dockerfiles
docker...
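The build command is cut off above. A sketch of what the final step typically looks like; the Dockerfile name and image tag below are assumptions, so check the repo's dockerfiles/ directory for the actual file:

```shell
# Sketch: build a CUDA-enabled onnxruntime image from the dockerfiles dir.
# Dockerfile.cuda and the tag name are assumptions, not verified specifics.
BUILD_CMD="docker build -t onnxruntime-cuda -f Dockerfile.cuda .."
echo "$BUILD_CMD"
```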
Hi nvidia-docker team, when I use the "CUDA Multi-Process Service" (MPS) in an nvidia-docker environment, how should I set the env CUDA_MPS_ACTIVE_THREAD_PERCENTAGE? There were some situations where multiple GPUs were needed for one...
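One common starting point (a heuristic assumption, not an official rule) is to divide SM capacity evenly among the expected number of MPS clients and tune from there:

```shell
# Sketch: split SM capacity evenly among N cooperating MPS clients.
# The even split is a heuristic; real workloads may want an oversubscribed
# value so clients can absorb each other's idle time.
nclients=4
CUDA_MPS_ACTIVE_THREAD_PERCENTAGE=$(( 100 / nclients ))
export CUDA_MPS_ACTIVE_THREAD_PERCENTAGE
echo "$CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"
```

Note that the variable limits the fraction of SMs a client may use; it does not reserve them, so setting it too low can leave the GPU underutilized.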
thumnail_cuda

Build
docker build -t ffmpeg .

Usage
Run the container mounting the current directory to /workspace, converting input.mp4 to output.avi without any hardware acceleration:
docker run --rm -it \
  --volume $PWD:/workspace \
  ffmpeg -i input.mp4 output.avi
docker run --rm -it --...
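For comparison, a hardware-accelerated run adds `--gpus all` on the docker side and CUDA decode / NVENC encode flags on the ffmpeg side. A sketch, assuming this image's ffmpeg was built with CUDA and NVENC support:

```shell
# Sketch: GPU-accelerated variant of the run above. Assumes the image's
# ffmpeg has --enable-cuda/--enable-nvenc and the host has the NVIDIA
# Container Toolkit installed.
CMD="docker run --rm -it --gpus all --volume \$PWD:/workspace \
  ffmpeg -hwaccel cuda -i input.mp4 -c:v h264_nvenc output.mp4"
echo "$CMD"
```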
Are you only able to use CUDA versions that are above your host's?

2. Steps to reproduce the issue
Running the following command:
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi
Results in the following output:
Unable to find image 'nvidia/cuda:10.2-base' locally
10.2-...
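A likely explanation for the "Unable to find image" error is that many older nvidia/cuda tags (including the bare 10.2-base style) have been removed from Docker Hub; currently published tags carry an explicit distro suffix. A sketch of the adjusted command, where the exact version and distro are assumptions to check against Docker Hub:

```shell
# Sketch: use a currently published nvidia/cuda tag with a distro suffix.
# 12.2.0 / ubuntu22.04 are illustrative, not verified specifics.
CMD="docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi"
echo "$CMD"
```

Separately, the container's CUDA version does not need to exceed the host's; the driver on the host just has to be new enough for the CUDA version inside the container.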
DOCKERTAG: "11.8"
CUDA_VER: "11.8.0"
DISTRO_ARCH: "amd64"
DISTRO_NAME: "centos"
DISTRO_VER: "7"
SHORT_DESCRIPTION: "conda-forge build image for CentOS 7 on x86_64 with CUDA"

DISTRO_NAME: "ubi"
DISTRO_VER: "8"
SHORT_DESCRIPTION: "conda-forge build image for UBI 8 on x86_64 ...