When calling an LLM deployed locally via Ollama, QAnything reports a "model not found" error, yet results are still returned normally.

## Expected Behavior
Results returned normally, without the "model not found" error.

## Environment
- OS: Ubuntu 20.04
- NVIDIA Driver:
- CUDA:
- Docker Compose:
- NVIDIA GPU Memory:

## QAnything Log
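For reference, a minimal sketch of how one can check which model names the local Ollama instance actually exposes, since a mismatch between the configured name and Ollama's registered tags is a common cause of this error. Assumptions: Ollama is running on its default port 11434, and `qwen:7b` is a placeholder for whatever model name QAnything is configured to send.

```python
# Minimal diagnostic sketch: compare the configured model name against
# the models Ollama reports as installed (GET /api/tags).
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # default Ollama endpoint (assumption)
CONFIGURED_MODEL = "qwen:7b"            # placeholder for the name QAnything sends

with urllib.request.urlopen(f"{OLLAMA_BASE}/api/tags") as resp:
    installed = [m["name"] for m in json.load(resp)["models"]]

print("Installed models:", installed)
if CONFIGURED_MODEL not in installed:
    # A name mismatch (e.g. a missing ":latest" tag) can trigger
    # "model not found" even while requests are still answered.
    print(f"'{CONFIGURED_MODEL}' is not in Ollama's model list")
```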