The request_timeout parameter of the openai.api_requestor.APIRequestor.arequest_raw method accepts a (connect, total) pair, so when calling openai.api_resources.chat_completion.ChatCompletion.acreate you can set request_timeout=(10, 300).

    async def arequest_raw(
        self, method, url, session, *, params=None, supplied_headers: Optional...
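A minimal sketch of that call, assuming the pre-1.0 openai Python package (where ChatCompletion.acreate and the tuple form of request_timeout are available); the model name and prompt are illustrative:

    import asyncio
    import openai

    openai.api_key = "sk-..."  # placeholder key

    async def ask():
        # request_timeout=(connect, total): up to 10s to connect, 300s overall
        resp = await openai.ChatCompletion.acreate(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello"}],
            request_timeout=(10, 300),
        )
        return resp["choices"][0]["message"]["content"]

    print(asyncio.run(ask()))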
Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
ServiceUnavailableError: The server is overloaded or not ready yet.
RateLimitError: That model is currently overloaded with other ...
Timeout: Occurs when a request times out.
APIConnectionError: Caused by issues connecting to our services.
InvalidRequestError: Occurs when your request is malformed or missing some required parameters.
AuthenticationError: Caused by an invalid, expired, or rev...
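A sketch of handling these error classes, assuming the pre-1.0 openai package where they live under openai.error; the retry count and backoff are illustrative choices:

    import time
    import openai

    def chat_with_retry(messages, retries=3):
        # Retry only the transient failures; re-raise the ones a retry cannot fix.
        for attempt in range(retries):
            try:
                return openai.ChatCompletion.create(
                    model="gpt-3.5-turbo",
                    messages=messages,
                    request_timeout=60,
                )
            except (openai.error.Timeout,
                    openai.error.APIConnectionError,
                    openai.error.ServiceUnavailableError,
                    openai.error.RateLimitError):
                time.sleep(2 ** attempt)  # simple exponential backoff
            except (openai.error.InvalidRequestError,
                    openai.error.AuthenticationError):
                raise  # bad request or bad key: retrying will not help
        raise RuntimeError("chat_with_retry: exhausted retries")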
"model": self.model_name, "request_timeout": self.request_timeout, "max_tokens": self.max_tokens, "stream": self.streaming, "n": self.n, "temperature": self.temperature, "api_key": self.openai_api_key, "api_base": self.openai_api_base, ...
    956 except httpx.TimeoutException as err:

File ~/.pyenv/versions/3.11.8/envs/langchain-muke/lib/python3.11/site-packages/httpx/_client.py:914, in Client.send(self, request, stream, auth, follow_redirects)
    912 auth = self._build_request_auth(request, auth)
--> 914 response = self._sen...
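Since this traceback comes from httpx, the transport used by openai>=1.0, the timeout can be raised at the client level. A sketch, assuming the 1.x openai client, which accepts a float or an httpx.Timeout; the values and model are illustrative:

    import httpx
    from openai import OpenAI

    # 10s to connect, 300s for the whole request.
    client = OpenAI(timeout=httpx.Timeout(300.0, connect=10.0))

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)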
from langchain_openai import ChatOpenAI

# Create the model object
llm = ChatOpenAI(
    model="gpt-4o-audio-preview",  # Specifying the model
    temperature=0,                 # Controls randomness in the output
    max_tokens=None,               # Unlimited tokens in output (or specify a max if needed)
    timeout=None,                  # Optional: Set a timeout for...
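For a concrete timeout, a hedged variant of the same constructor (langchain_openai's ChatOpenAI also accepts max_retries; the model name and values here are illustrative):

    from langchain_openai import ChatOpenAI

    # 30-second request timeout and two automatic retries.
    llm = ChatOpenAI(
        model="gpt-4o-mini",
        temperature=0,
        timeout=30,
        max_retries=2,
    )
    print(llm.invoke("Say hello in one word.").content)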
    request_timeout=30,
    n=n,
    stop=stop,
    temperature=temperature,
)

This is not the entire code or even a reproducible example. I'm only showing two things: the implementation of the decorator that times out the entire function, and the request_timeout parameter. ...
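A minimal sketch of such a decorator, using only the standard library (the name with_timeout and the limits are illustrative, not the original author's code):

    import concurrent.futures
    import functools

    def with_timeout(seconds=60):
        """Fail the wrapped call if it does not return within `seconds`."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
                try:
                    future = pool.submit(fn, *args, **kwargs)
                    # Raises concurrent.futures.TimeoutError if the call is late;
                    # note the worker thread itself keeps running in the background.
                    return future.result(timeout=seconds)
                finally:
                    pool.shutdown(wait=False)
            return wrapper
        return decorator

    @with_timeout(seconds=90)
    def ask_model(prompt):
        ...  # the OpenAI call with its own request_timeout goes here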
    287 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
    289 result = self.request_raw(
    290     method.lower(),
    291     url,
   (...)
    297     request_timeout=request_timeout,
    298 )
--> 299 resp, got...
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model_name="gpt-4",
    request_timeout=120,
)

openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. Try upgrading it with pip install --upgrade openai. (type=value_error)

Expected...
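Before upgrading, it may help to confirm which openai version is actually installed; a minimal check (nothing here is specific to langchain):

    import importlib.metadata

    # Print the installed openai package version so the upgrade advice above
    # can be matched against what the environment actually has.
    print(importlib.metadata.version("openai"))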
    673     options=make_request_options(
    674         extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    675     ),
    676     cast_to=ChatCompletion,
    677     stream=stream or False,
    678     stream_cls=Stream[ChatCompletionChunk],
    679 )

File ~/Documents/Workshop/.venv/lib/python3.12...
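That frame is from the 1.x openai client, where the timeout can also be overridden per request via with_options; a sketch, with the value and model chosen for illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Per-call override: this request gets 120 seconds instead of the client default.
    resp = client.with_options(timeout=120).chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)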