@misc{open-text-embeddings,
  author    = {Lim Chee Kin},
  title     = {open-text-embeddings: Open Source Text Embedding Models with OpenAI API-Compatible Endpoint},
  year      = {2023},
  publisher = {GitHub},
  journal   = {GitHub repository}
}
Create a compatible JSONL file with sample texts for embedding (one JSON object per line, each with a "text" field). You can generate this file with the following command on the Linux command line:

echo '{"text": "What was the first car ever driven?"}
{"text": "Who served as the 5th President of the United States o...
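The same JSONL file can be produced with a short Python sketch. The file name and sample texts below are illustrative assumptions; the only requirement is one JSON object per line with a "text" field:

```python
import json

# Hypothetical sample texts; JSONL convention is one JSON object per line.
texts = [
    "What was the first car ever driven?",
    "Who served as the 5th President of the United States of America?",
]

with open("sample_texts.jsonl", "w") as f:
    for t in texts:
        f.write(json.dumps({"text": t}) + "\n")
```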
* Embedding TEI Langchain compatible with OpenAI API

  Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  For more information, see https://pre-commit.ci

* TextDoc support list

  Signed-off-by: Xinyao Wang <xinyao.wang@intel.com>

* ...
Allow the URL to be passed in as an environment variable, and test that Vanna can still use a model compatible with the API. Call this a "generic OpenAI" class that allows connecting to a local LM Studio or LiteLLM instance exposing an LLM. Ideally, do the same with the embedding class...
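A minimal sketch of such a "generic OpenAI" embedding class, assuming the base URL comes from an `OPENAI_BASE_URL` environment variable (the class name, default model, and env-var names here are illustrative assumptions, not Vanna's actual API). It only builds the HTTP request, so it can point at a local LM Studio or LiteLLM server without any vendor SDK:

```python
import os
import json
import urllib.request

class GenericOpenAIEmbeddings:
    """Sketch of a 'generic OpenAI' embedding client.

    The base URL is read from an environment variable so any
    OpenAI-compatible server (LM Studio, LiteLLM, etc.) can be used.
    """

    def __init__(self, model="text-embedding-3-small"):
        self.base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
        self.api_key = os.environ.get("OPENAI_API_KEY", "")
        self.model = model

    def build_request(self, texts):
        # OpenAI-style /v1/embeddings body: "model" plus "input"
        # (a string or a list of strings).
        return urllib.request.Request(
            self.base_url.rstrip("/") + "/embeddings",
            data=json.dumps({"model": self.model, "input": texts}).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {self.api_key}",
            },
        )
```

Sending the request is then just `urllib.request.urlopen(client.build_request([...]))` against whichever endpoint the environment points at.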
I think I see two PRs related to this (#2925 and #3642). As others have said, the fact that the api/embeddings endpoint doesn't accept an array of inputs AND the difference in the request structure vs. OpenAI's structure (per #2416 (comment)) are both major blockers to using Ollama in a...
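To make the mismatch concrete: Ollama's native api/embeddings endpoint takes a single "prompt" string, while OpenAI's /v1/embeddings takes an "input" that may be a string or a list. A sketch of the fan-out an adapter would need (field names follow the two APIs as described above; hedged, not taken from either PR):

```python
def openai_to_ollama_requests(openai_body):
    """Convert one OpenAI-style embeddings request body into a list of
    Ollama-native /api/embeddings bodies, which each accept only a
    single 'prompt' string."""
    texts = openai_body["input"]
    if isinstance(texts, str):
        texts = [texts]  # normalize the single-string form to a list
    return [{"model": openai_body["model"], "prompt": t} for t in texts]
```

A batch of N inputs therefore costs N round-trips against the native endpoint, which is exactly why the missing array support is a blocker.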
This project seems awesome, thanks for building it. Would it be possible to:

- Expose a variable for the LLM endpoint address, so systems like Ollama could be used as OpenAI API-compatible endpoints?
- Would you be able to offer the tool / ...
# MODEL_EMBEDDING_NAME=nomic-embed-text

# Experimental: Use any OpenAI-compatible API
# OPENAI_BASE_URL=https://example.com/v1
# OPENAI_API_KEY=

## === Proxy ===
# PROXY_SERVER can be a full URL (e.g. http://0.1.2.3:1234) or just an IP and port combo (e.g. 0.1.2.3:123...
    env.EMBEDDING_ENGINE || "inherit",
+   VectorDbSelection: process.env.VECTOR_DB || "lancedb",
+ });
+ await EventLogs.logEvent("api_sent_chat", {
+   workspaceName: workspace?.name,
+   chatModel: workspace?.chatModel || "System Default",
+ });
...
embedding = torch.mean(data, dim=0)
return json.dumps(
    {
        "embedding": embedding.tolist(),
        "token_num": len(self.tokenizer(params["input"]).input_ids),
    }
)

ret = {
    "embedding": embedding.tolist(),
    "token_num": len(self.tokenizer(params["input"]).input_ids),
}
except torch.cu...
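The `torch.mean(data, dim=0)` call above is mean pooling: averaging the per-token embedding vectors into one fixed-size sentence embedding. A dependency-free sketch of the same operation on plain lists (illustrative only, not the server's code):

```python
def mean_pool(token_embeddings):
    """Average a list of per-token embedding vectors into a single
    vector, the same idea as torch.mean(data, dim=0) above."""
    n = len(token_embeddings)
    dim = len(token_embeddings[0])
    return [sum(vec[i] for vec in token_embeddings) / n for i in range(dim)]
```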
- Embedding API
- Transcription & Translation API
- Speech API
- Chat Completion API with tools
- Chat Completion API streaming
- Chat Completion API with image input
- Create Image API
- Create Image Edit API
- Create Image Variant API

As the Assistant API is still in beta and is super slow, we don't have plan...