* Embedding TEI Langchain compatible with OpenAI API (Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* TextDoc support list (Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>)
* ...
Create a compatible JSONL file with sample texts for embedding. You can generate this file with the following command on the Linux command line:

echo '{"text": "What was the first car ever driven?"}
{"text": "Who served as the 5th President of the United States of...
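A JSONL file is simply one JSON object per line. If you prefer to generate it programmatically, a minimal Python sketch (the filename input.jsonl is an assumption, not taken from the original) looks like this:

```python
import json

# Sample texts to embed; one JSON object per line is what makes the file JSONL.
texts = [
    "What was the first car ever driven?",
]

# "input.jsonl" is a hypothetical filename chosen for this sketch.
with open("input.jsonl", "w", encoding="utf-8") as f:
    for t in texts:
        f.write(json.dumps({"text": t}) + "\n")
```

Each additional sample text becomes one more line in the file; no surrounding array or commas are needed.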
Allow the URL to be passed in as an environment variable, and test that Vanna can still use a model compatible with the API. Call this a "generic OpenAI" class that allows connecting to a local LM Studio or LiteLLM instance exposing an LLM. Ideally do the same with the embedding class...
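A minimal sketch of such a "generic OpenAI" configuration is below. The class name and the OPENAI_BASE_URL / OPENAI_API_KEY variable names are assumptions borrowed from common convention; the original request does not fix any names:

```python
import os
from typing import Optional

class GenericOpenAIConfig:
    """Connection settings for any OpenAI-compatible server.

    Sketch only: the class name and the environment-variable names
    (OPENAI_BASE_URL, OPENAI_API_KEY) are assumptions, not from the original.
    """

    def __init__(self, base_url: Optional[str] = None, api_key: Optional[str] = None):
        # Fall back to environment variables so the endpoint can be swapped
        # (e.g. to a local LM Studio or LiteLLM instance) without code changes.
        self.base_url = (base_url
                         or os.environ.get("OPENAI_BASE_URL", "http://localhost:1234/v1")).rstrip("/")
        self.api_key = api_key or os.environ.get("OPENAI_API_KEY", "sk-local")

    def endpoint(self, path: str) -> str:
        """Join the base URL with an API path such as 'chat/completions'."""
        return f"{self.base_url}/{path.lstrip('/')}"
```

Pointing the same class at Ollama, LM Studio, or LiteLLM is then just a matter of changing the environment variable, which is exactly the flexibility the request asks for.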
glorat commented on Feb 12, 2024
prabirshrestha mentioned this on Feb 24, 2024
prabirshrestha mentioned this on Mar 4, 2024: add with_api_base and with_org_id and pass model as string for llm::openai to support Ollama and other OpenAI-compatible APIs (Abraxas-365/langchain-rust#17) ...
feat: Implement Ollama embedding (d64c84f)
feat: Display model name in knowledge settings (0ea69af)
feat: Add OpenAI API provider support (b9973bb)
n4ze3m merged commit d9dc186 into next on Oct 13, 2024 and deleted the openai branch in November...
env.EMBEDDING_ENGINE || "inherit",
+   VectorDbSelection: process.env.VECTOR_DB || "lancedb",
+ });
+ await EventLogs.logEvent("api_sent_chat", {
+   workspaceName: workspace?.name,
+   chatModel: workspace?.chatModel || "System Default",
+ });
...
/v1/embeddings

/v1/embeddings is modelled after the OpenAI endpoint to create an embedding vector. The parameters model and input are respected; user is ignored.

/v1/completions

/v1/completions is modelled after the OpenAI endpoint to create a completion. The parameters model, prompt, max_tokens, temperature...
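The parameter handling described above can be sketched as a small request builder. The function name and the choice to drop unsupported parameters client-side are illustrative assumptions (the server itself simply ignores fields such as user):

```python
import json

# Per the docs above, only these parameters are respected by /v1/embeddings;
# "user" is accepted by the OpenAI schema but ignored by this server.
RESPECTED_EMBEDDING_PARAMS = {"model", "input"}

def build_embeddings_request(base_url: str, **params) -> tuple:
    """Build the URL and JSON body for a /v1/embeddings call.

    Hypothetical helper for this sketch: parameters the server would
    ignore (e.g. "user") are dropped here to mirror its behaviour.
    """
    body = {k: v for k, v in params.items() if k in RESPECTED_EMBEDDING_PARAMS}
    url = base_url.rstrip("/") + "/v1/embeddings"
    return url, json.dumps(body).encode("utf-8")
```

For example, build_embeddings_request("http://localhost:8080", model="nomic-embed-text", input=["hello"], user="alice") would produce a body containing only model and input.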
# MODEL_EMBEDDING_NAME=nomic-embed-text

# Experimental: Use any OpenAI-compatible API
# OPENAI_BASE_URL=https://example.com/v1
# OPENAI_API_KEY=

## === Proxy ===
# PROXY_SERVER can be a full URL (e.g. http://0.1.2.3:1234) or just an IP and port combo (e.g. 0.1.2.3:123...
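Because PROXY_SERVER accepts either a full URL or a bare host:port pair, a consumer has to normalize the value before use. A sketch of that normalization, where defaulting the missing scheme to http:// is an assumption the config comment does not confirm:

```python
def normalize_proxy(proxy: str) -> str:
    """Accept 'http://0.1.2.3:1234' or '0.1.2.3:1234' and return a full URL.

    Assumption: a value without a scheme is treated as plain HTTP; the
    original config comment does not say which scheme is implied.
    """
    proxy = proxy.strip()
    if "://" in proxy:
        return proxy  # already a full URL, pass through unchanged
    return "http://" + proxy
```

Both documented forms then yield the same usable URL, so downstream code can rely on a single format.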
This project seems awesome. Thanks for building it. Would it be possible to expose a variable for the LLM endpoint address, so that systems like Ollama could be used as OpenAI-API-compatible endpoints? Would you be able to offer the tool / ...