localai-api-1 | 5:02PM DBG GRPC Service for luna-ai-llama2-uncensored.Q4_0.gguf will be running at: '127.0.0.1:33301'
localai-api-1 | 5:02PM DBG GRPC Service state dir: /tmp/go-processmanager3078149800
localai-api-1 | 5:02PM DBG GRPC Service Started
localai-api-1 | rpc err...
9:09PM DBG GRPC Service Ready
9:09PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:phi-2.Q2_K ContextSize:512 Seed:1785535671 NBatch:512 F16Memory:false MLock:false MMap:tr...
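When the log prints the backend address but requests still fail with rpc errors (as in the first excerpt), it can help to check whether anything is actually listening on that address. A minimal Python sketch, assuming it is run on the same host or container where the backend binds 127.0.0.1:33301 (the address is taken from the log above; the timeout is an arbitrary choice):

```python
import grpc

# Address reported by the LocalAI debug log for the backend (assumption:
# run this inside the same container/host, since the backend binds loopback).
BACKEND_ADDR = "127.0.0.1:33301"

channel = grpc.insecure_channel(BACKEND_ADDR)
try:
    # Block until the channel connects or the timeout expires.
    grpc.channel_ready_future(channel).result(timeout=15)
    print(f"gRPC backend at {BACKEND_ADDR} is reachable")
except grpc.FutureTimeoutError:
    print(f"gRPC backend at {BACKEND_ADDR} did not become ready; "
          "the backend process may have crashed or never bound the port")
finally:
    channel.close()
```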
An "error gRPC service" message was encountered when running the LocalAI Docker image on Apple Silicon. For a successful demo run, both Apache Karaf and LocalAI were run on the same host. The result: once LocalAI is up and running (this can take a few minutes from start), our Agent command can start using our LLM ...
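Before wiring the Agent in, it can be useful to confirm the model actually responds through LocalAI's OpenAI-compatible API. A minimal Python sketch, assuming LocalAI is exposed on localhost:8080 and the model name matches the loaded GGUF file (both are assumptions; adjust to your deployment):

```python
import requests

# Assumed LocalAI endpoint and model name; adjust to your deployment.
BASE_URL = "http://localhost:8080"
MODEL = "luna-ai-llama2-uncensored.Q4_0.gguf"

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=300,  # the first request can be slow while the model loads
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this call succeeds, the gRPC backend is loading the model correctly and any remaining failures are on the client side.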
fix: listmodelservice / welcome endpoint use LOOSE_ONLY by @dave-gray101 in #3791

Exciting New Features 🎉

feat(api): list loaded models in /system by @mudler in #3661
feat: Add Get Token Metrics to GRPC server by @siddimore in #3687
refactor: ListModels Filtering Upgrade by @dave-...
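As a quick illustration of the /system feature mentioned above (#3661), the endpoint can be queried directly. A minimal sketch, assuming LocalAI is reachable on localhost:8080; it simply prints whatever JSON the endpoint returns rather than assuming a particular response shape:

```python
import json

import requests

# Assumed LocalAI address; adjust to your deployment.
BASE_URL = "http://localhost:8080"

# /system reports the loaded models (per #3661); just dump the JSON.
resp = requests.get(f"{BASE_URL}/system", timeout=10)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```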
The tutorial I used has an issue: it assumes the service is on port 8080, but it is actually on port 80. Once I updated the cluster service port, the errors went away. However, I am still confused about the basic operations of k8sgpt.

~$ k8sgpt analyze
AI Provider: AI not used; --explain not set
0 default...
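Since the fix above came down to a port mismatch, a small probe can confirm which port the LocalAI Service actually answers on before pointing k8sgpt at it. A rough Python sketch, run from a pod inside the cluster (or against a port-forward); the service hostname is a placeholder, and /v1/models is used only as a cheap read-only endpoint:

```python
import requests

# Placeholder in-cluster Service name; substitute your actual Service DNS name.
HOST = "local-ai.default.svc.cluster.local"

for port in (80, 8080):
    url = f"http://{HOST}:{port}/v1/models"
    try:
        r = requests.get(url, timeout=5)
        print(f"{url} -> HTTP {r.status_code}")
    except requests.RequestException as exc:
        print(f"{url} -> unreachable ({exc})")
```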
Increased the sleep time in the setUp method of backend/python/openvoice/test.py from 10 to 30 seconds to ensure the gRPC service is fully started before tests run. (backend/python/openvoice/test.py)

Changes to installation script:
Commented out the block of code in backend/python/vllm/install.sh that...
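For context, the change described above is simply a longer fixed delay in the test fixture. A rough sketch of that pattern (the class and startup command below are illustrative, not the actual test code):

```python
import subprocess
import time
import unittest


class TestBackendStartup(unittest.TestCase):
    def setUp(self):
        # Illustrative startup command; the real test launches the backend differently.
        self.service = subprocess.Popen(
            ["python", "backend.py", "--addr", "localhost:50051"]
        )
        # The change described above: sleep 30s (was 10s) so the gRPC
        # service is fully started before the tests run.
        time.sleep(30)

    def tearDown(self):
        self.service.terminate()
        self.service.wait()
```

A readiness poll (retrying a TCP connect or the backend's health check until it succeeds) would avoid guessing a sleep duration, at the cost of a slightly more involved fixture.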
Data Independence: Sending sensitive information to a third-party service may not be suitable or permissible for all types of data or organizations. By hosting your own LLM, you retain full control over your data.
Scalability: Pay-as-you-go AI services can become expensive, especially when larg...