'model': 'qwen2', 'max_tokens': 4000, 'request_timeout': 180.0,
'api_base': 'http://localhost:11434/v1', 'api_version': None,
'organization': None, 'proxy': None, 'cognitive_services_endpoint': None,
'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute':...
ollama pull nomic-embed-text  # embedding model
# 3) For the LLM, use ModelScope's Mistral-7B-Instruct-v0.3. Model link:
# https://modelscope.cn/models/LLM-Research/Mistral-7B-Instruct-v0.3-GGUF
modelscope download --model=LLM-Research/Mistral-7B-Instruct-v0.3-GGUF --local_dir . Mistral-7B-Instruct-v0.3.fp16.gg...
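Once the GGUF file is downloaded, one common way to serve it through Ollama (so the OpenAI-compatible endpoint used later can reach it) is to register it with a Modelfile. This is a hedged sketch: the GGUF filename is assumed from the truncated download command above, and the model tag `mistral` is an arbitrary choice.

```
# Modelfile — registers the local GGUF with Ollama
# (filename is an assumption based on the download above)
FROM ./Mistral-7B-Instruct-v0.3.fp16.gguf
```

Register it with `ollama create mistral -f Modelfile`; after that, the model is reachable as `mistral` through Ollama's OpenAI-compatible API at `http://localhost:11434/v1`.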
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3.1
  # model_supports_json: true # recommended if this is available for your model.
  max_tokens: 2000
  # request_timeout: 180.0
  api_base: http://127.0.0.1:11434/v1
  # api...
Could someone please help: when building the GraphRAG index, the run aborts with an error at the create_final_entities step. The error in the log reads: "Error code: 400 - {'error': {'code': 'InvalidParameter', 'message': 'One or more parameters specified in the request are not valid. Request id: 021724747960210a602', 'param': 'encoding_format', 'type...
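The error above points at the `encoding_format` request parameter: the OpenAI Python client sends `encoding_format="base64"` by default for embedding requests, and some OpenAI-compatible endpoints only accept `"float"`. A minimal sketch of the workaround, assuming the fix is applied to the request parameters before they are sent (the payload shape below is an illustration, not GraphRAG internals):

```python
# Some OpenAI-compatible embedding endpoints reject the client's default
# encoding_format ("base64"); forcing "float" is a common workaround.
def force_float_encoding(params: dict) -> dict:
    """Return a copy of the embedding request params with encoding_format='float'."""
    fixed = dict(params)  # don't mutate the caller's dict
    fixed["encoding_format"] = "float"
    return fixed

request = {
    "model": "nomic-embed-text",
    "input": ["some text chunk"],
    "encoding_format": "base64",  # the default that triggers the 400
}
patched = force_float_encoding(request)
# patched["encoding_format"] is now "float"; the original dict is untouched
```

Equivalently, passing `encoding_format="float"` explicitly where the embeddings call is made (or in the embeddings section of settings.yaml, if your version exposes it) avoids the 400.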
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.az...
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: deepseek-chat # changed
  model_supports_json: false # recommended if this is available for your model.
  api_base: https://api.agicto.cn/v1 # changed
  # max_...
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: mistral
  model_supports_json: true
  api_base: http://localhost:11434/v1
...
embeddings:
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type...
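Since the embeddings block above is cut off, here is the shape such a section commonly takes when pointing GraphRAG at Ollama's OpenAI-compatible endpoint. This is a reference sketch, not the poster's actual file: the embedding model name and port are assumptions carried over from the `ollama pull nomic-embed-text` step earlier in the thread.

```
embeddings:
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11434/v1
```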
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  ...
    )
        self.model = model
        self.encoding_name = encoding_name
        self.max_tokens = max_tokens
        self.token_encoder = tiktoken.get_encoding(self.encoding_name)
        self.retry_error_types = retry_error_types
        self.embedding_dim = 384  # Nomic-embed-text model dimension
        self.ollama_client = ollama.Client(...
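To make the intent of those fields concrete, here is a minimal, self-contained sketch of how a wrapper like this typically uses them: truncate input to `max_tokens` before embedding, then validate the returned vector against `embedding_dim`. The stub tokenizer (in place of tiktoken) and the stub client (in place of `ollama.Client`) are assumptions made so the sketch runs on its own.

```python
class StubEncoder:
    """Whitespace 'tokenizer' standing in for tiktoken (assumption for this sketch)."""
    def encode(self, text: str) -> list[str]:
        return text.split()

    def decode(self, tokens: list[str]) -> str:
        return " ".join(tokens)

class EmbeddingWrapper:
    def __init__(self, max_tokens: int = 8, embedding_dim: int = 384):
        self.max_tokens = max_tokens          # truncation budget for input text
        self.embedding_dim = embedding_dim    # expected vector length
        self.token_encoder = StubEncoder()

    def _truncate(self, text: str) -> str:
        """Keep at most max_tokens tokens of the input."""
        tokens = self.token_encoder.encode(text)
        return self.token_encoder.decode(tokens[: self.max_tokens])

    def embed(self, text: str, client) -> list[float]:
        """Embed truncated text and check the vector has the expected dimension."""
        vector = client(self._truncate(text))
        if len(vector) != self.embedding_dim:
            raise ValueError(f"expected {self.embedding_dim} dims, got {len(vector)}")
        return vector

# Stub client returning a fixed-size vector (assumption in place of ollama).
fake_client = lambda text: [0.0] * 384

wrapper = EmbeddingWrapper()
vec = wrapper.embed("one two three four five six seven eight nine ten", fake_client)
# vec has length 384; the input was truncated to the first 8 tokens
```

The dimension check matters in practice: if the configured `embedding_dim` disagrees with what the model actually returns, vector-store inserts fail later with a much less obvious error.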