from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
import os

# Define tools
@tool(description="Add two numbers")
def add(a: int, b: int) -> int:
    return a + b

@tool(description="Multiply two numbers")
def multiply(a: int, b: int) -> int:
    return a * b
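Once tools like these are defined, the model returns tool calls by name and the application executes them locally. The sketch below (hypothetical helper names, no network calls; plain functions stand in for the decorated tools) shows how such calls could be dispatched:

```python
# Plain-function stand-ins for the decorated tools above.
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

# Hypothetical dispatcher: routes a model tool call to the local function.
TOOLS = {"add": add, "multiply": multiply}

def dispatch(tool_call: dict) -> int:
    """Run the named tool with the arguments supplied by the model."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# Example: the model requested both operations in a single turn.
calls = [
    {"name": "add", "args": {"a": 3, "b": 5}},
    {"name": "multiply", "args": {"a": 4, "b": 6}},
]
results = [dispatch(c) for c in calls]
print(results)  # [8, 24]
```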
For context, the inner `raw_gen_ai_request` has both tool calls in its output: llm.openai.choices: "[{'finish_reason': 'tool_calls', 'index': 0, 'logprobs': None, 'message': {'content': None, 'refusal': None, 'role': 'assistant', 'audio': None, 'function_call': None, 'tool_...
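A choice with `finish_reason == 'tool_calls'` can carry several tool calls in one assistant message (the parallel case shown above). A minimal sketch of parsing such a choice, using a simplified stand-in for the truncated payload:

```python
import json

# Simplified stand-in for the raw OpenAI choice above: one assistant
# message carrying two tool calls (the parallel case).
choice = {
    "finish_reason": "tool_calls",
    "index": 0,
    "message": {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {"id": "call_1", "type": "function",
             "function": {"name": "add", "arguments": '{"a": 2, "b": 3}'}},
            {"id": "call_2", "type": "function",
             "function": {"name": "multiply", "arguments": '{"a": 2, "b": 3}'}},
        ],
    },
}

def extract_tool_calls(choice: dict) -> list[tuple[str, dict]]:
    """Return (name, parsed-arguments) pairs from one choice."""
    out = []
    if choice["finish_reason"] == "tool_calls":
        for tc in choice["message"]["tool_calls"]:
            out.append((tc["function"]["name"],
                        json.loads(tc["function"]["arguments"])))
    return out

print(extract_tool_calls(choice))
# [('add', {'a': 2, 'b': 3}), ('multiply', {'a': 2, 'b': 3})]
```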
assert m.name() == 'openai:gpt-4'
assert m.enable_parallel_tool_calls is False

Member samuelcolvin Jan 7, 2025

This doesn't really test very much; we should check that this changes the request made to OpenAI. There are other tests in this file that should give you a hint...
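As the review suggests, a stronger test asserts on the outgoing request rather than on local state. A generic sketch with `unittest.mock` (the `run_model` wrapper here is hypothetical; the idea is to capture the kwargs that would reach the OpenAI client):

```python
from unittest.mock import MagicMock

# Hypothetical wrapper: forwards a model setting into the request kwargs.
def run_model(client, prompt: str, parallel_tool_calls: bool = True):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        parallel_tool_calls=parallel_tool_calls,
    )

client = MagicMock()
run_model(client, "2 + 3?", parallel_tool_calls=False)

# Assert on the request that would be sent to OpenAI, not on the model object.
kwargs = client.chat.completions.create.call_args.kwargs
assert kwargs["parallel_tool_calls"] is False
```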
I'm using function calling to invoke a sequence of native functions. The exact sequence is important, but the implemented behavior is the default behavior (if I'm not mistaken), which corresponds to parallel_tool_calls = true. I'm wondering if we can have this property in OpenAIPromptExecution...
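For reference, the OpenAI Chat Completions API exposes a boolean `parallel_tool_calls` request field; setting it to false makes the model emit at most one tool call per turn, which is what a strict call sequence needs. A sketch of building such a payload (the `build_request` helper is hypothetical):

```python
# Hypothetical payload builder: disables parallel tool calls when a strict
# one-call-per-turn sequence is required. `parallel_tool_calls` is a real
# OpenAI Chat Completions request field.
def build_request(messages: list, tools: list, sequential: bool = False) -> dict:
    payload = {"model": "gpt-4o", "messages": messages, "tools": tools}
    if sequential:
        payload["parallel_tool_calls"] = False
    return payload

req = build_request(
    messages=[{"role": "user", "content": "Add 2 and 3, then double it."}],
    tools=[{"type": "function", "function": {"name": "add"}}],
    sequential=True,
)
print(req["parallel_tool_calls"])  # False
```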