openai.resources
The resources module aggregates classes and functions for interacting with the OpenAI API into several submodules, each representing a specific resource or feature of the API. The submodules' classes mirror the structure of the API's endpoints and offer both synchronous and asynchronous communication with the API.
Each resource is accessible as an attribute on the OpenAI and AsyncOpenAI clients. To work with a resource, initialize a client and then access the resource as an attribute on the client instance. For example, to work with the chat resource, create an instance of the OpenAI client and access the attributes and methods on your_client_instance.chat, as shown in the sketch below.
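The following is a minimal, illustrative sketch only; it assumes the OPENAI_API_KEY environment variable is set and that a chat model such as gpt-3.5-turbo is available to your account.

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

# Access the `chat` resource as an attribute on the client instance.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute any chat model you can use
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```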
Modules:
Name | Description |
---|---|
audio |
Provides access to the audio endpoints for speech synthesis, transcriptions, and translations, with both synchronous and asynchronous interfaces. |
beta |
The module aggregates beta functionality for features not yet considered generally available (GA), offering a simplified entry point for interacting with these resources. It is designed to give easy access to cutting-edge features still under development, enabling developers to experiment with and leverage new capabilities before they become GA. |
chat |
The module supports both synchronous and asynchronous operations, offering interfaces for direct interaction with the completion endpoints tailored for chat applications. It is designed for developers looking to integrate AI-powered chat functionality into their applications and includes features like raw and streaming response handling for more flexible integration. |
completions |
You should not use this module for new projects. It wraps the legacy completions endpoint, and you're strongly encouraged to migrate existing applications to the chat module. |
embeddings |
The module is appropriate for use in applications that require semantic analysis of text, like similarity searches, text clustering, and other natural language processing tasks that can benefit from high-dimensional vector representations of text. |
files |
The module supports both synchronous and asynchronous operations, along with handling of raw responses and streaming of file content. Designed for use cases that involve managing large datasets or files for purposes like fine-tuning models or using assistants, this module facilitates the efficient handling of file-related operations on the OpenAI platform. |
fine_tuning |
The module supports synchronous and asynchronous operations, offering interfaces for working with jobs directly, as well as with raw or streaming responses. Designed for use in applications requiring custom model training on specific datasets to improve model performance for tailored tasks. |
images |
The module supports both synchronous and asynchronous operations, with capabilities for handling raw responses and streaming. Suitable for applications requiring dynamic image generation or modification through the OpenAI API, this module leverages models like DALL-E to interpret text prompts into visual content. |
models |
The module enables developers to interact with models, providing functionalities like fetching detailed information about a specific model, listing all available models, and deleting fine-tuned models. |
moderations |
Moderation is particularly useful for developers looking to ensure the content generated or processed by their applications adheres to OpenAI's content policy. By leveraging the content moderation models provided by OpenAI, applications can automatically classify and filter out text that might be considered harmful or inappropriate. |
Classes:
AsyncAudio
AsyncAudio(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
speech |
|
transcriptions |
|
translations |
|
with_raw_response |
|
with_streaming_response |
|
AsyncAudioWithRawResponse
AsyncAudioWithRawResponse(audio: AsyncAudio)
AsyncAudioWithStreamingResponse
AsyncAudioWithStreamingResponse(audio: AsyncAudio)
AsyncBeta
AsyncBeta(client: AsyncOpenAI)
AsyncBetaWithRawResponse
AsyncBetaWithRawResponse(beta: AsyncBeta)
AsyncBetaWithStreamingResponse
AsyncBetaWithStreamingResponse(beta: AsyncBeta)
AsyncChat
AsyncChat(client: AsyncOpenAI)
AsyncChatWithRawResponse
AsyncChatWithRawResponse(chat: AsyncChat)
AsyncChatWithStreamingResponse
AsyncChatWithStreamingResponse(chat: AsyncChat)
AsyncCompletions
AsyncCompletions(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
create |
Creates a completion for the provided prompt and parameters. |
with_raw_response |
|
with_streaming_response |
|
create
async
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
stream: Optional[Literal[False]] | NotGiven = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
stream: Literal[True],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> AsyncStream[Completion]
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
stream: bool,
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion | AsyncStream[Completion]
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
stream: (
Optional[Literal[False]] | Literal[True] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion | AsyncStream[Completion]
Creates a completion for the provided prompt and parameters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model
|
Union[str, Literal['gpt-3.5-turbo-instruct', 'davinci-002', 'babbage-002']]
|
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. |
required |
prompt
|
Union[str, List[str], Iterable[int], Iterable[Iterable[int]], None]
|
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document. |
required |
best_of
|
Optional[int] | NotGiven
|
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
NOT_GIVEN
|
echo
|
Optional[bool] | NotGiven
|
Echo back the prompt in addition to the completion |
NOT_GIVEN
|
frequency_penalty
|
Optional[float] | NotGiven
|
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. |
NOT_GIVEN
|
logit_bias
|
Optional[Dict[str, int]] | NotGiven
|
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated. |
NOT_GIVEN
|
logprobs
|
Optional[int] | NotGiven
|
Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. |
NOT_GIVEN
|
max_tokens
|
Optional[int] | NotGiven
|
The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
NOT_GIVEN
|
presence_penalty
|
Optional[float] | NotGiven
|
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more information about frequency and presence penalties. |
NOT_GIVEN
|
seed
|
Optional[int] | NotGiven
|
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. |
NOT_GIVEN
|
stop
|
Union[Optional[str], List[str], None] | NotGiven
|
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
NOT_GIVEN
|
stream
|
Optional[Literal[False]] | Literal[True] | NotGiven
|
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. |
NOT_GIVEN
|
suffix
|
Optional[str] | NotGiven
|
The suffix that comes after a completion of inserted text. |
NOT_GIVEN
|
temperature
|
Optional[float] | NotGiven
|
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
NOT_GIVEN
|
top_p
|
Optional[float] | NotGiven
|
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
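A minimal usage sketch for this method; it assumes the OPENAI_API_KEY environment variable is set, and the prompt and parameter values are illustrative only.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    completion = await client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Write a tagline for an ice cream shop.",
        max_tokens=32,
        temperature=0.7,
    )
    print(completion.choices[0].text)


asyncio.run(main())
```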
AsyncCompletionsWithRawResponse
AsyncCompletionsWithRawResponse(
completions: AsyncCompletions,
)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
AsyncCompletionsWithStreamingResponse
AsyncCompletionsWithStreamingResponse(
completions: AsyncCompletions,
)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
AsyncEmbeddings
AsyncEmbeddings(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
create |
Creates an embedding vector representing the input text. |
with_raw_response |
|
with_streaming_response |
|
create
async
create(
*,
input: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
],
model: Union[
str,
Literal[
"text-embedding-ada-002",
"text-embedding-3-small",
"text-embedding-3-large",
],
],
dimensions: int | NotGiven = NOT_GIVEN,
encoding_format: (
Literal["float", "base64"] | NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> CreateEmbeddingResponse
Creates an embedding vector representing the input text.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input
|
Union[str, List[str], Iterable[int], Iterable[Iterable[int]]]
|
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. |
required |
model
|
Union[str, Literal['text-embedding-ada-002', 'text-embedding-3-small', 'text-embedding-3-large']]
|
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. |
required |
dimensions
|
int | NotGiven
|
The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
NOT_GIVEN
|
encoding_format
|
Literal['float', 'base64'] | NotGiven
|
The format to return the embeddings in. Can be either float or base64. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
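A minimal sketch of embedding several inputs at once; it assumes the OPENAI_API_KEY environment variable is set and that the text-embedding-3-small model is available to your account.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=["The food was delicious.", "The service was slow."],
    )
    # One embedding is returned per input, in the same order as the inputs.
    for item in response.data:
        print(item.index, len(item.embedding))


asyncio.run(main())
```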
AsyncEmbeddingsWithRawResponse
AsyncEmbeddingsWithRawResponse(embeddings: AsyncEmbeddings)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
AsyncEmbeddingsWithStreamingResponse
AsyncEmbeddingsWithStreamingResponse(
embeddings: AsyncEmbeddings,
)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
AsyncFiles
AsyncFiles(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
content |
Returns the contents of the specified file. |
create |
Upload a file that can be used across various endpoints. |
delete |
Delete a file. |
list |
Returns a list of files that belong to the user's organization. |
retrieve |
Returns information about a specific file. |
retrieve_content |
Returns the contents of the specified file. |
wait_for_processing |
Waits for the given file to be processed, default timeout is 30 mins. |
with_raw_response |
|
with_streaming_response |
|
content
async
content(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> HttpxBinaryResponseContent
Returns the contents of the specified file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
create
async
create(
*,
file: FileTypes,
purpose: Literal["fine-tune", "assistants"],
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileObject
Upload a file that can be used across various endpoints.
The size of all the files uploaded by one organization can be up to 100 GB.
The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
Please contact us if you need to increase these storage limits.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file
|
FileTypes
|
The File object (not file name) to be uploaded. |
required |
purpose
|
Literal['fine-tune', 'assistants']
|
The intended purpose of the uploaded file. Use "fine-tune" for Fine-tuning and "assistants" for Assistants and Messages. This allows us to validate that the format of the uploaded file is correct for fine-tuning. |
required |
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
delete
async
delete(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileDeleted
Delete a file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
list
list(
*,
purpose: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> AsyncPaginator[FileObject, AsyncPage[FileObject]]
Returns a list of files that belong to the user's organization.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
purpose
|
str | NotGiven
|
Only return files with the given purpose. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
retrieve
async
retrieve(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileObject
Returns information about a specific file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
retrieve_content
async
retrieve_content(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> str
Returns the contents of the specified file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
wait_for_processing
async
wait_for_processing(
id: str,
*,
poll_interval: float = 5.0,
max_wait_seconds: float = 30 * 60
) -> FileObject
Waits for the given file to be processed, default timeout is 30 mins.
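A minimal sketch of uploading a file and waiting for it to be processed; training.jsonl is a hypothetical local file in the fine-tuning JSONL format, and the OPENAI_API_KEY environment variable is assumed to be set.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()
    # `training.jsonl` is a hypothetical local file in the fine-tuning JSONL format.
    with open("training.jsonl", "rb") as f:
        uploaded = await client.files.create(file=f, purpose="fine-tune")
    # Poll until the platform finishes processing the file (default timeout: 30 minutes).
    processed = await client.files.wait_for_processing(uploaded.id)
    print(processed.id, processed.status)


asyncio.run(main())
```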
AsyncFilesWithRawResponse
AsyncFilesWithRawResponse(files: AsyncFiles)
AsyncFilesWithStreamingResponse
AsyncFilesWithStreamingResponse(files: AsyncFiles)
AsyncFineTuning
AsyncFineTuning(client: AsyncOpenAI)
AsyncFineTuningWithRawResponse
AsyncFineTuningWithRawResponse(
fine_tuning: AsyncFineTuning,
)
AsyncFineTuningWithStreamingResponse
AsyncFineTuningWithStreamingResponse(
fine_tuning: AsyncFineTuning,
)
AsyncImages
AsyncImages(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
create_variation |
Creates a variation of a given image. |
edit |
Creates an edited or extended image given an original image and a prompt. |
generate |
Creates an image given a prompt. |
with_raw_response |
|
with_streaming_response |
|
create_variation
async
create_variation(
*,
image: FileTypes,
model: (
Union[str, Literal["dall-e-2"], None] | NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[Literal["256x256", "512x512", "1024x1024"]]
| NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates a variation of a given image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image
|
FileTypes
|
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square. |
required |
model
|
Union[str, Literal['dall-e-2'], None] | NotGiven
|
The model to use for image generation. Only dall-e-2 is supported at this time. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
edit
async
edit(
*,
image: FileTypes,
prompt: str,
mask: FileTypes | NotGiven = NOT_GIVEN,
model: (
Union[str, Literal["dall-e-2"], None] | NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[Literal["256x256", "512x512", "1024x1024"]]
| NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates an edited or extended image given an original image and a prompt.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image
|
FileTypes
|
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask. |
required |
prompt
|
str
|
A text description of the desired image(s). The maximum length is 1000 characters. |
required |
mask
|
FileTypes | NotGiven
|
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image. |
NOT_GIVEN
|
model
|
Union[str, Literal['dall-e-2'], None] | NotGiven
|
The model to use for image generation. Only dall-e-2 is supported at this time. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
generate
async
generate(
*,
prompt: str,
model: (
Union[str, Literal["dall-e-2", "dall-e-3"], None]
| NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
quality: (
Literal["standard", "hd"] | NotGiven
) = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[
Literal[
"256x256",
"512x512",
"1024x1024",
"1792x1024",
"1024x1792",
]
]
| NotGiven
) = NOT_GIVEN,
style: (
Optional[Literal["vivid", "natural"]] | NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates an image given a prompt.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prompt
|
str
|
A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3. |
required |
model
|
Union[str, Literal['dall-e-2', 'dall-e-3'], None] | NotGiven
|
The model to use for image generation. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. |
NOT_GIVEN
|
quality
|
Literal['standard', 'hd'] | NotGiven
|
The quality of the image that will be generated. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024', '1792x1024', '1024x1792']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. |
NOT_GIVEN
|
style
|
Optional[Literal['vivid', 'natural']] | NotGiven
|
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images, while natural produces more natural, less hyper-real looking images. This parameter is only supported for dall-e-3. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
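A minimal sketch of generating an image; it assumes the OPENAI_API_KEY environment variable is set and that the dall-e-3 model is available to your account.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()
    result = await client.images.generate(
        model="dall-e-3",
        prompt="A watercolor painting of a lighthouse at dawn",
        size="1024x1024",
        n=1,  # dall-e-3 only supports n=1
    )
    print(result.data[0].url)


asyncio.run(main())
```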
AsyncImagesWithRawResponse
AsyncImagesWithRawResponse(images: AsyncImages)
Attributes:
Name | Type | Description |
---|---|---|
create_variation |
|
|
edit |
|
|
generate |
|
AsyncImagesWithStreamingResponse
AsyncImagesWithStreamingResponse(images: AsyncImages)
Attributes:
Name | Type | Description |
---|---|---|
create_variation |
|
|
edit |
|
|
generate |
|
AsyncModels
AsyncModels(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
delete |
Delete a fine-tuned model. |
list |
Lists the currently available models, and provides basic information about each one such as the owner and availability. |
retrieve |
Retrieves a model instance, providing basic information about the model such as the owner and permissioning. |
with_raw_response |
|
with_streaming_response |
|
delete
async
delete(
model: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ModelDeleted
Delete a fine-tuned model.
You must have the Owner role in your organization to delete a model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
list
list(
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> AsyncPaginator[Model, AsyncPage[Model]]
Lists the currently available models, and provides basic information about each one such as the owner and availability.
retrieve
async
retrieve(
model: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Model
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
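A minimal sketch of listing and retrieving models; the OPENAI_API_KEY environment variable is assumed to be set, and the model id shown is illustrative.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()
    # The returned AsyncPaginator can be iterated with `async for`.
    async for model in client.models.list():
        print(model.id)
    model = await client.models.retrieve("gpt-3.5-turbo-instruct")
    print(model.id, model.owned_by)


asyncio.run(main())
```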
AsyncModelsWithRawResponse
AsyncModelsWithRawResponse(models: AsyncModels)
AsyncModelsWithStreamingResponse
AsyncModelsWithStreamingResponse(models: AsyncModels)
AsyncModerations
AsyncModerations(client: AsyncOpenAI)
Methods:
Name | Description |
---|---|
create |
Classifies if text is potentially harmful. |
with_raw_response |
|
with_streaming_response |
|
create
async
create(
*,
input: Union[str, List[str]],
model: (
Union[
str,
Literal[
"text-moderation-latest",
"text-moderation-stable",
],
]
| NotGiven
) = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ModerationCreateResponse
Classifies if text is potentially harmful.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input
|
Union[str, List[str]]
|
The input text to classify |
required |
model
|
Union[str, Literal['text-moderation-latest', 'text-moderation-stable']] | NotGiven
|
Two content moderation models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest, which will be automatically upgraded over time so that you are always using the most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
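A minimal sketch of classifying a piece of text; it assumes the OPENAI_API_KEY environment variable is set, and the input string is illustrative only.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()
    result = await client.moderations.create(input="I want to say something unkind.")
    moderation = result.results[0]
    print(moderation.flagged)              # True if any category is flagged
    print(moderation.categories.harassment)


asyncio.run(main())
```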
AsyncModerationsWithRawResponse
AsyncModerationsWithRawResponse(
moderations: AsyncModerations,
)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
AsyncModerationsWithStreamingResponse
AsyncModerationsWithStreamingResponse(
moderations: AsyncModerations,
)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
Audio
Audio(client: OpenAI)
Methods:
Name | Description |
---|---|
speech |
|
transcriptions |
|
translations |
|
with_raw_response |
|
with_streaming_response |
|
AudioWithRawResponse
AudioWithRawResponse(audio: Audio)
AudioWithStreamingResponse
AudioWithStreamingResponse(audio: Audio)
Beta
Beta(client: OpenAI)
BetaWithStreamingResponse
BetaWithStreamingResponse(beta: Beta)
Chat
Chat(client: OpenAI)
ChatWithRawResponse
ChatWithRawResponse(chat: Chat)
ChatWithStreamingResponse
ChatWithStreamingResponse(chat: Chat)
Completions
Completions(client: OpenAI)
Methods:
Name | Description |
---|---|
create |
Creates a completion for the provided prompt and parameters. |
with_raw_response |
|
with_streaming_response |
|
create
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
stream: Optional[Literal[False]] | NotGiven = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
stream: Literal[True],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Stream[Completion]
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
stream: bool,
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion | Stream[Completion]
create(
*,
model: Union[
str,
Literal[
"gpt-3.5-turbo-instruct",
"davinci-002",
"babbage-002",
],
],
prompt: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
None,
],
best_of: Optional[int] | NotGiven = NOT_GIVEN,
echo: Optional[bool] | NotGiven = NOT_GIVEN,
frequency_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
logit_bias: (
Optional[Dict[str, int]] | NotGiven
) = NOT_GIVEN,
logprobs: Optional[int] | NotGiven = NOT_GIVEN,
max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
presence_penalty: (
Optional[float] | NotGiven
) = NOT_GIVEN,
seed: Optional[int] | NotGiven = NOT_GIVEN,
stop: (
Union[Optional[str], List[str], None] | NotGiven
) = NOT_GIVEN,
stream: (
Optional[Literal[False]] | Literal[True] | NotGiven
) = NOT_GIVEN,
suffix: Optional[str] | NotGiven = NOT_GIVEN,
temperature: Optional[float] | NotGiven = NOT_GIVEN,
top_p: Optional[float] | NotGiven = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Completion | Stream[Completion]
Creates a completion for the provided prompt and parameters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model
|
Union[str, Literal['gpt-3.5-turbo-instruct', 'davinci-002', 'babbage-002']]
|
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. |
required |
prompt
|
Union[str, List[str], Iterable[int], Iterable[Iterable[int]], None]
|
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document. |
required |
best_of
|
Optional[int] | NotGiven
|
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
NOT_GIVEN
|
echo
|
Optional[bool] | NotGiven
|
Echo back the prompt in addition to the completion |
NOT_GIVEN
|
frequency_penalty
|
Optional[float] | NotGiven
|
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. |
NOT_GIVEN
|
logit_bias
|
Optional[Dict[str, int]] | NotGiven
|
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated. |
NOT_GIVEN
|
logprobs
|
Optional[int] | NotGiven
|
Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. |
NOT_GIVEN
|
max_tokens
|
Optional[int] | NotGiven
|
The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
NOT_GIVEN
|
presence_penalty
|
Optional[float] | NotGiven
|
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more information about frequency and presence penalties. |
NOT_GIVEN
|
seed
|
Optional[int] | NotGiven
|
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. |
NOT_GIVEN
|
stop
|
Union[Optional[str], List[str], None] | NotGiven
|
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
NOT_GIVEN
|
stream
|
Optional[Literal[False]] | Literal[True] | NotGiven
|
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. |
NOT_GIVEN
|
suffix
|
Optional[str] | NotGiven
|
The suffix that comes after a completion of inserted text. |
NOT_GIVEN
|
temperature
|
Optional[float] | NotGiven
|
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
NOT_GIVEN
|
top_p
|
Optional[float] | NotGiven
|
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
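A minimal sketch of the synchronous streaming form of this method; it assumes the OPENAI_API_KEY environment variable is set, and the prompt is illustrative only.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
stream = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Write a haiku about the sea.",
    max_tokens=64,
    stream=True,  # returns Stream[Completion] instead of a single Completion
)
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].text, end="", flush=True)
print()
```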
CompletionsWithRawResponse
CompletionsWithRawResponse(completions: Completions)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
CompletionsWithStreamingResponse
CompletionsWithStreamingResponse(completions: Completions)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
Embeddings
Embeddings(client: OpenAI)
Methods:
Name | Description |
---|---|
create |
Creates an embedding vector representing the input text. |
with_raw_response |
|
with_streaming_response |
|
create
create(
*,
input: Union[
str,
List[str],
Iterable[int],
Iterable[Iterable[int]],
],
model: Union[
str,
Literal[
"text-embedding-ada-002",
"text-embedding-3-small",
"text-embedding-3-large",
],
],
dimensions: int | NotGiven = NOT_GIVEN,
encoding_format: (
Literal["float", "base64"] | NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> CreateEmbeddingResponse
Creates an embedding vector representing the input text.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input
|
Union[str, List[str], Iterable[int], Iterable[Iterable[int]]]
|
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. |
required |
model
|
Union[str, Literal['text-embedding-ada-002', 'text-embedding-3-small', 'text-embedding-3-large']]
|
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. |
required |
dimensions
|
int | NotGiven
|
The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. |
NOT_GIVEN
|
encoding_format
|
Literal['float', 'base64'] | NotGiven
|
The format to return the embeddings in. Can be either float or base64. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
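A minimal sketch that also shortens the embedding with the dimensions parameter; it assumes the OPENAI_API_KEY environment variable is set and that the text-embedding-3-large model is available to your account.

```python
from openai import OpenAI

client = OpenAI()
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="Semantic search works on vectors, not keywords.",
    dimensions=256,  # only the text-embedding-3 family supports shortened embeddings
)
print(len(response.data[0].embedding))  # 256
```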
EmbeddingsWithRawResponse
EmbeddingsWithRawResponse(embeddings: Embeddings)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
EmbeddingsWithStreamingResponse
EmbeddingsWithStreamingResponse(embeddings: Embeddings)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
Files
Files(client: OpenAI)
Methods:
Name | Description |
---|---|
content |
Returns the contents of the specified file. |
create |
Upload a file that can be used across various endpoints. |
delete |
Delete a file. |
list |
Returns a list of files that belong to the user's organization. |
retrieve |
Returns information about a specific file. |
retrieve_content |
Returns the contents of the specified file. |
wait_for_processing |
Waits for the given file to be processed, default timeout is 30 mins. |
with_raw_response |
|
with_streaming_response |
|
content
content(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> HttpxBinaryResponseContent
Returns the contents of the specified file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
create
create(
*,
file: FileTypes,
purpose: Literal["fine-tune", "assistants"],
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileObject
Upload a file that can be used across various endpoints.
The size of all the files uploaded by one organization can be up to 100 GB.
The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
Please contact us if you need to increase these storage limits.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
file
|
FileTypes
|
The File object (not file name) to be uploaded. |
required |
purpose
|
Literal['fine-tune', 'assistants']
|
The intended purpose of the uploaded file. Use "fine-tune" for Fine-tuning and "assistants" for Assistants and Messages. This allows us to validate that the format of the uploaded file is correct for fine-tuning. |
required |
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
delete
delete(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileDeleted
Delete a file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
list
list(
*,
purpose: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> SyncPage[FileObject]
Returns a list of files that belong to the user's organization.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
purpose
|
str | NotGiven
|
Only return files with the given purpose. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
retrieve
retrieve(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> FileObject
Returns information about a specific file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
retrieve_content
retrieve_content(
file_id: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> str
Returns the contents of the specified file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
wait_for_processing
wait_for_processing(
id: str,
*,
poll_interval: float = 5.0,
max_wait_seconds: float = 30 * 60
) -> FileObject
Waits for the given file to be processed, default timeout is 30 mins.
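A minimal sketch of listing files and downloading one; the OPENAI_API_KEY environment variable is assumed to be set, and "file-abc123" is a placeholder id.

```python
from openai import OpenAI

client = OpenAI()
# SyncPage is iterable and fetches additional pages transparently.
for file in client.files.list(purpose="fine-tune"):
    print(file.id, file.filename, file.bytes)

# Download the raw bytes of a single file; "file-abc123" is a placeholder id.
content = client.files.content("file-abc123")
content.write_to_file("downloaded.jsonl")
```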
FilesWithRawResponse
FilesWithRawResponse(files: Files)
FilesWithStreamingResponse
FilesWithStreamingResponse(files: Files)
FineTuning
FineTuning(client: OpenAI)
FineTuningWithRawResponse
FineTuningWithRawResponse(fine_tuning: FineTuning)
FineTuningWithStreamingResponse
FineTuningWithStreamingResponse(fine_tuning: FineTuning)
Images
Images(client: OpenAI)
Methods:
Name | Description |
---|---|
create_variation |
Creates a variation of a given image. |
edit |
Creates an edited or extended image given an original image and a prompt. |
generate |
Creates an image given a prompt. |
with_raw_response |
|
with_streaming_response |
|
create_variation
create_variation(
*,
image: FileTypes,
model: (
Union[str, Literal["dall-e-2"], None] | NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[Literal["256x256", "512x512", "1024x1024"]]
| NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates a variation of a given image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image
|
FileTypes
|
The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square. |
required |
model
|
Union[str, Literal['dall-e-2'], None] | NotGiven
|
The model to use for image generation. Only dall-e-2 is supported at this time. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
edit
edit(
*,
image: FileTypes,
prompt: str,
mask: FileTypes | NotGiven = NOT_GIVEN,
model: (
Union[str, Literal["dall-e-2"], None] | NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[Literal["256x256", "512x512", "1024x1024"]]
| NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates an edited or extended image given an original image and a prompt.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image
|
FileTypes
|
The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask. |
required |
prompt
|
str
|
A text description of the desired image(s). The maximum length is 1000 characters. |
required |
mask
|
FileTypes | NotGiven
|
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image. |
NOT_GIVEN
|
model
|
Union[str, Literal['dall-e-2'], None] | NotGiven
|
The model to use for image generation. Only dall-e-2 is supported at this time. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
generate
generate(
*,
prompt: str,
model: (
Union[str, Literal["dall-e-2", "dall-e-3"], None]
| NotGiven
) = NOT_GIVEN,
n: Optional[int] | NotGiven = NOT_GIVEN,
quality: (
Literal["standard", "hd"] | NotGiven
) = NOT_GIVEN,
response_format: (
Optional[Literal["url", "b64_json"]] | NotGiven
) = NOT_GIVEN,
size: (
Optional[
Literal[
"256x256",
"512x512",
"1024x1024",
"1792x1024",
"1024x1792",
]
]
| NotGiven
) = NOT_GIVEN,
style: (
Optional[Literal["vivid", "natural"]] | NotGiven
) = NOT_GIVEN,
user: str | NotGiven = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ImagesResponse
Creates an image given a prompt.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prompt
|
str
|
A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3. |
required |
model
|
Union[str, Literal['dall-e-2', 'dall-e-3'], None] | NotGiven
|
The model to use for image generation. |
NOT_GIVEN
|
n
|
Optional[int] | NotGiven
|
The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported. |
NOT_GIVEN
|
quality
|
Literal['standard', 'hd'] | NotGiven
|
The quality of the image that will be generated. |
NOT_GIVEN
|
response_format
|
Optional[Literal['url', 'b64_json']] | NotGiven
|
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. |
NOT_GIVEN
|
size
|
Optional[Literal['256x256', '512x512', '1024x1024', '1792x1024', '1024x1792']] | NotGiven
|
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. |
NOT_GIVEN
|
style
|
Optional[Literal['vivid', 'natural']] | NotGiven
|
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images, while natural produces more natural, less hyper-real looking images. This parameter is only supported for dall-e-3. |
NOT_GIVEN
|
user
|
str | NotGiven
|
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
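A minimal sketch that uses the with_raw_response variant to inspect HTTP details alongside the parsed result; the OPENAI_API_KEY environment variable is assumed to be set, and the prompt is illustrative only.

```python
from openai import OpenAI

client = OpenAI()
# with_raw_response exposes the HTTP response; parse() recovers the typed object.
raw = client.images.with_raw_response.generate(
    model="dall-e-2",
    prompt="A pixel-art rocket taking off",
    size="256x256",
)
print(raw.headers.get("x-request-id"))
images = raw.parse()  # convert back into an ImagesResponse
print(images.data[0].url)
```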
ImagesWithRawResponse
ImagesWithRawResponse(images: Images)
Attributes:
Name | Type | Description |
---|---|---|
create_variation |
|
|
edit |
|
|
generate |
|
ImagesWithStreamingResponse
ImagesWithStreamingResponse(images: Images)
Attributes:
Name | Type | Description |
---|---|---|
create_variation |
|
|
edit |
|
|
generate |
|
Models
Models(client: OpenAI)
Methods:
Name | Description |
---|---|
delete |
Delete a fine-tuned model. |
list |
Lists the currently available models, and provides basic information about each one such as the owner and availability. |
retrieve |
Retrieves a model instance, providing basic information about the model such as the owner and permissioning. |
with_raw_response |
|
with_streaming_response |
|
delete
delete(
model: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ModelDeleted
Delete a fine-tuned model.
You must have the Owner role in your organization to delete a model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
list
list(
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> SyncPage[Model]
Lists the currently available models, and provides basic information about each one such as the owner and availability.
retrieve
retrieve(
model: str,
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Model
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
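A minimal sketch of retrieving a model synchronously; the OPENAI_API_KEY environment variable is assumed to be set, and both model ids shown are placeholders.

```python
from openai import OpenAI

client = OpenAI()
model = client.models.retrieve("gpt-3.5-turbo-instruct")
print(model.id, model.owned_by)

# Deleting requires the Owner role and a fine-tuned model you own; the id is a placeholder.
# client.models.delete("ft:gpt-3.5-turbo:my-org:custom-suffix:abc123")
```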
ModelsWithRawResponse
ModelsWithRawResponse(models: Models)
ModelsWithStreamingResponse
ModelsWithStreamingResponse(models: Models)
Moderations
Moderations(client: OpenAI)
Methods:
Name | Description |
---|---|
create |
Classifies if text is potentially harmful. |
with_raw_response |
|
with_streaming_response |
|
create
create(
*,
input: Union[str, List[str]],
model: (
Union[
str,
Literal[
"text-moderation-latest",
"text-moderation-stable",
],
]
| NotGiven
) = NOT_GIVEN,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> ModerationCreateResponse
Classifies if text is potentially harmful.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input
|
Union[str, List[str]]
|
The input text to classify |
required |
model
|
Union[str, Literal['text-moderation-latest', 'text-moderation-stable']] | NotGiven
|
Two content moderation models are available: text-moderation-stable and text-moderation-latest. The default is text-moderation-latest, which will be automatically upgraded over time so that you are always using the most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. |
NOT_GIVEN
|
extra_headers
|
Headers | None
|
Send extra headers |
None
|
extra_query
|
Query | None
|
Add additional query parameters to the request |
None
|
extra_body
|
Body | None
|
Add additional JSON properties to the request |
None
|
timeout
|
float | Timeout | None | NotGiven
|
Override the client-level default timeout for this request, in seconds |
NOT_GIVEN
|
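A minimal sketch of screening several inputs in one request; it assumes the OPENAI_API_KEY environment variable is set, and the input strings are illustrative only.

```python
from openai import OpenAI

client = OpenAI()
result = client.moderations.create(
    input=["First text to screen.", "Second text to screen."],
    model="text-moderation-latest",
)
for item in result.results:
    print(item.flagged, item.category_scores.violence)
```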
ModerationsWithRawResponse
ModerationsWithRawResponse(moderations: Moderations)
Attributes:
Name | Type | Description |
---|---|---|
create |
|
ModerationsWithStreamingResponse
ModerationsWithStreamingResponse(moderations: Moderations)
Attributes:
Name | Type | Description |
---|---|---|
create |
|