Error log:

```
BADKEY=loadModel
The program was built for 1 devices
Build program log for 'Intel(R) Iris(R) Xe Graphics': -11 (PI_ERROR_BUILD_PROGRAM_FAILURE)
Exception caught at file:/home/runner/_work/llm.cpp/llm.cpp/ollama-llama-cpp/ggml/src/ggml-sycl.cpp, line:2927
time=2025-02-05T10:33:...
```
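The `-11 (PI_ERROR_BUILD_PROGRAM_FAILURE)` means the SYCL runtime failed to build the GPU kernels for the iGPU, which often points at the environment the failing process runs under rather than the model itself. As a sanity check, one can confirm that the SYCL runtime in that environment actually sees the Iris Xe device; `sycl-ls` ships with the oneAPI Base Toolkit, and the `setvars.sh` path below is the default install location (adjust if yours differs):

```sh
# Load the oneAPI environment (default install path; may differ on your system)
source /opt/intel/oneapi/setvars.sh

# List the devices the SYCL runtime can see; the Iris Xe iGPU should show up
# as a Level Zero and/or OpenCL GPU entry
sycl-ls
```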
It works fine to run the LLM with the Ollama CLI (ipex-llm backend) on the MTL iGPU, as below:

```
$ ollama run llama3.1:8b
```

However, running the same model through Open WebUI fails, with the error shown above. I have no idea what is happening. For the WebUI install, I followed the pip install instructions for Open WebUI and got it working with Ollama...
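Since the CLI path works, one way to narrow this down is to call the Ollama HTTP API directly, i.e. exercise the same server that Open WebUI talks to, without the WebUI in the loop. This is a minimal sketch assuming Ollama is serving on its default port 11434:

```sh
# Query the Ollama server directly over its HTTP API
# (default endpoint; adjust host/port if OLLAMA_HOST was changed)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Hello",
  "stream": false
}'
```

If this request reproduces the same `PI_ERROR_BUILD_PROGRAM_FAILURE`, the problem is in the environment of the ipex-llm Ollama server process (for example, it was started without the oneAPI environment loaded) rather than in Open WebUI itself; if it succeeds, the difference lies in how Open WebUI invokes the server.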