…gpt-4o-mini

| Parameter | Type | Description | Default |
|---|---|---|---|
| entity_extract_max_gleaning | int | Number of loops in the entity extraction process, appending history messages | 1 |
| entity_summary_to_max_tokens | int | Maximum token size for each entity summary | 500 |
| node_embedding_algorithm | str | Algorithm for node embedding (currently not used) | node2vec |

node...
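These defaults can be collected into a plain dictionary, for example to override only some of them at initialization. A minimal sketch; the dictionary name and the idea of passing it via `**kwargs` are illustrative, not taken from the library:

```python
# Hypothetical illustration: the documented defaults gathered into a dict,
# as they might be passed to a constructor via **kwargs.
default_params = {
    "entity_extract_max_gleaning": 1,        # extraction loops, appending history messages
    "entity_summary_to_max_tokens": 500,     # token cap per entity summary
    "node_embedding_algorithm": "node2vec",  # currently not used by the library
}

# Override a single default while keeping the rest.
params = {**default_params, "entity_summary_to_max_tokens": 300}
```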
```
# Required GitHub token of your own
GITHUB_AI_TOKEN=

# Optional API keys
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
HUGGINGFACE_API_KEY=
GROQ_API_KEY=
XAI_API_KEY=
```

Start with CLI Mode

[🚨 News] We have added an easier-to-use command to start ...
```
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_BINDING_API_KEY=your_api_key

### Max tokens sent to LLM (less than model context size)
MAX_TOKENS=32768

EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3:lates...
```
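At startup these settings would typically be read from the environment with the documented values as fallbacks. A minimal sketch using only the variable names shown in the excerpt above; the exact loading logic in the server may differ:

```python
import os

# Read binding settings from the environment, falling back to the
# documented defaults when a variable is unset.
llm_binding = os.getenv("LLM_BINDING", "openai")
llm_host = os.getenv("LLM_BINDING_HOST", "https://api.openai.com/v1")
# Keep this below the model's context window size.
max_tokens = int(os.getenv("MAX_TOKENS", "32768"))
```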
```
tiktoken_model_name = gpt-4o-mini,
entity_extract_max_gleaning = 1,
entity_summary_to_max_tokens = 500,
node_embedding_algorithm = node2vec,
node2vec_params = {'dimensions': 1536, 'num_walks': 10, 'walk_length': 40, 'window_size': ...
```
```
Deepseek-Chat-V2      236B→21B   0.746   0.580   0.757   0.362   0.312   0.516
GPT4o-mini            -          0.592   0.343   0.634   0.692*  0.592   0.591
Gemini-1.5-Flash      -          0.748   0.504   0.714   0.684   0.487   0.533

Finetuned LLMs
Llama3-8b Finetuned   8B         0.794   0.593   0.736   0.620   0.554   0.553

GraphRAG Implementations
```
```
INFO:openai._base_client:Retrying request to /chat/completions in 1.148000 seconds
```

Eventually it failed with this error response:

```
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o-mini in organization org-3EfNuOTOsuBTQxJg5nBEBipX on tokens per mi...
```
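A common way to survive 429 responses like the one above is exponential backoff with jitter. A generic sketch, not the OpenAI client's actual retry logic; `RuntimeError` stands in for `openai.RateLimitError` so the example is self-contained:

```python
import random
import time

def with_backoff(call, retries=5, base=1.0, cap=30.0):
    """Retry `call` on a rate-limit error, sleeping longer each attempt."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for openai.RateLimitError
            if attempt == retries - 1:
                raise  # out of retries: propagate the error
            # Exponential delay with jitter, capped at `cap` seconds.
            delay = min(cap, base * 2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)
```

Lowering the request rate or raising the organization's tokens-per-minute limit addresses the root cause; backoff only smooths over transient bursts.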
There were no errors when building the knowledge base, but errors occurred when querying. I use Qwen2.5-7B-Instruct-GPTQ-Int4 as the large language model and bge-large-zh-v1.5 as the embedding model, with a PDF file as input. Please help me!...
```
INFO:lightrag:Logger initialized for working directory: ../wd
DEBUG:lightrag:LightRAG init with param: working_dir=../wd, chunk_token_size=1200, chunk_overlap_token_size=100, tiktoken_model_name=gpt-4o-mini, entity_extract_max_gleaning=1, entity_summary_to_max_tokens=500, node_embedding_algorithm=node2vec, no...
```
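The logged settings `chunk_token_size=1200` and `chunk_overlap_token_size=100` describe a sliding-window chunker. A minimal sketch of that windowing over a pre-tokenized sequence; the real library tokenizes with tiktoken and its implementation may differ:

```python
def chunk_tokens(tokens, size=1200, overlap=100):
    """Split a token sequence into windows of `size` tokens,
    each sharing `overlap` tokens with the previous window."""
    if not tokens:
        return []
    step = size - overlap  # advance 1100 tokens per chunk by default
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]
```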
```
{ "is_scanning": False,
```

```diff
@@ -80,61 +101,6 @@
 def estimate_tokens(text: str) -> int:
     return int(tokens)
-
-# read config.ini
-config = configparser.ConfigParser()
-config.read("config.ini", "utf-8")
-# Redis config
-redis_uri = config.get("redis", "uri", fallback=None)...
```
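The diff shows an `estimate_tokens` helper whose body is elided. A hedged sketch of one common chars-per-token heuristic such a helper might use; the actual implementation in the repository may differ:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token for English text.
    This is a guess at the helper's logic, not the repository's actual code."""
    tokens = len(text) / 4
    return int(tokens)
```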