For example:

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = llm.invoke(prompt, generate_config={"max_tokens": max_tokens})
print(cb.total_tokens)
```
The langchain get_openai_callback() function has not been updated for the Azure model names; they are all there: gpt-35-turbo-1106 -> gpt-3...
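Until the callback's built-in price table recognizes the Azure deployment names, one workaround is to map the Azure name to its OpenAI-style equivalent and price the token counts reported by the callback yourself. The sketch below is only an illustration, not LangChain's own fix: `AZURE_TO_OPENAI_NAME`, `PRICE_PER_1K`, and `estimate_cost` are made-up names, the mapping assumes `gpt-35-turbo-1106` corresponds to `gpt-3.5-turbo-1106`, and the per-1K prices are placeholders to be checked against current pricing.

```python
# Workaround sketch: normalize an Azure deployment name to the OpenAI-style
# name and estimate cost from the callback's token counts ourselves.
# All names and prices below are illustrative assumptions.

AZURE_TO_OPENAI_NAME = {
    "gpt-35-turbo-1106": "gpt-3.5-turbo-1106",  # assumed equivalent naming
}

# Assumed USD prices per 1K tokens as (prompt, completion); verify against current pricing.
PRICE_PER_1K = {
    "gpt-3.5-turbo-1106": (0.001, 0.002),
}


def estimate_cost(model_name: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate cost when cb.total_cost stays at 0 for an unrecognized Azure model name."""
    name = AZURE_TO_OPENAI_NAME.get(model_name, model_name)
    prompt_price, completion_price = PRICE_PER_1K.get(name, (0.0, 0.0))
    return (prompt_tokens / 1000) * prompt_price + (completion_tokens / 1000) * completion_price


# Usage with the callback from the example above:
#   cost = estimate_cost("gpt-35-turbo-1106", cb.prompt_tokens, cb.completion_tokens)
print(estimate_cost("gpt-35-turbo-1106", 1200, 300))  # 0.0018 with the placeholder prices
```

The callback still gives correct `prompt_tokens`, `completion_tokens`, and `total_tokens`; only the cost lookup depends on the model name, which is why normalizing the name before pricing is enough here.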