LLMCompiler can be used with open-source models such as LLaMA, as well as OpenAI’s GPT models. Across a range of tasks that exhibit different patterns of parallel function calling, LLMCompiler consistently demonstrated latency speedups, cost savings, and accuracy improvements. For more details, ...
I'm using function calling to invoke a sequence of native functions. The exact sequence matters, but the implemented behavior is (if I'm not wrong) the default one, which corresponds to parallel_tool_calls = true. I'm wondering if we can have this property in OpenAIPromptExecution...
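For context, here is a minimal sketch of the underlying OpenAI chat-completions parameter being asked about, using the raw OpenAI Python SDK rather than Semantic Kernel (whose settings class is the subject of the question). The model name and the `get_weather` tool are placeholders; the point is only that `parallel_tool_calls` defaults to true and can be set to false to force at most one tool call per turn:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical native function exposed as a tool, for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Weather in Paris, then in Rome?"}],
    tools=tools,
    parallel_tool_calls=False,  # disable the default parallel behavior
)

# With parallel_tool_calls=False, the response contains at most one tool call,
# so the caller controls the exact ordering across successive turns.
print(response.choices[0].message.tool_calls)
```

Exposing an equivalent property on the prompt execution settings would let callers opt out of parallel calls when, as here, the invocation order of the native functions is significant.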