Error handling
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of openai.APIConnectionError is raised.
When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of openai.APIStatusError is raised, containing status_code and response properties.
All errors inherit from openai.APIError.
```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 409 | ConflictError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
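Because each status code maps to its own exception class, you can also catch one of the specific error types from the table above instead of the generic openai.APIStatusError. A minimal sketch, reusing the same request as above:

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",
    )
except openai.AuthenticationError:
    # 401: the API key is missing or invalid.
    print("Check that OPENAI_API_KEY is set correctly.")
except openai.BadRequestError as e:
    # 400: the request itself was malformed (e.g. an unknown parameter).
    print(f"Bad request: {e.status_code}")
```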
Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the max_retries option to configure or disable retry settings:
```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in Node.js?",
        }
    ],
    model="gpt-3.5-turbo",
)
```
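With retries disabled, a retryable status code such as 429 surfaces immediately as an exception, so you can implement your own backoff instead. A minimal sketch, using a hypothetical budget of three attempts:

```python
import time

import openai
from openai import OpenAI

client = OpenAI(max_retries=0)  # handle retries manually

for attempt in range(3):  # hypothetical retry budget
    try:
        client.chat.completions.create(
            messages=[{"role": "user", "content": "Hello"}],
            model="gpt-3.5-turbo",
        )
        break
    except openai.RateLimitError:
        # 429: back off exponentially before trying again.
        time.sleep(2**attempt)
```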
Timeouts
By default, requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:
```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-3.5-turbo",
)
```
On timeout, an APITimeoutError is raised.
Note that requests that time out are retried twice by default.
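If you want to handle a timeout yourself once the automatic retries are exhausted, you can catch the error explicitly. A minimal sketch, assuming a 5-second timeout:

```python
import openai
from openai import OpenAI

client = OpenAI(timeout=5.0)

try:
    client.chat.completions.create(
        messages=[{"role": "user", "content": "Hello"}],
        model="gpt-3.5-turbo",
    )
except openai.APITimeoutError:
    # Raised after the request (and its automatic retries) exceeds the timeout.
    print("The request timed out; consider raising the timeout or retrying later.")
```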