See: https://www.reddit.com/r/LocalLLaMA/comments/13gok03/llamacpp_now_officially_supports_gpu_acceleration/
See: https://github.com/ggerganov/llama.cpp#:~:text=acceleration%20using%20the-,CUDA,-cores%20of%20your

So the latest llama.cpp release now officially supports GPU acceleration via CUDA.