translations

Classes:

Name Description
AsyncTranslations
AsyncTranslationsWithRawResponse
AsyncTranslationsWithStreamingResponse
Translations
TranslationsWithRawResponse
TranslationsWithStreamingResponse

AsyncTranslations

AsyncTranslations(client: AsyncOpenAI)

Methods:

Name Description
create

Translates audio into English.

with_raw_response
with_streaming_response

create async

create(
    *,
    file: FileTypes,
    model: Union[str, Literal["whisper-1"]],
    prompt: str | NotGiven = NOT_GIVEN,
    response_format: str | NotGiven = NOT_GIVEN,
    temperature: float | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Translation

Translates audio into English.

Parameters:

Name Type Description Default
file FileTypes

The audio file object (not the file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

required
model Union[str, Literal['whisper-1']]

ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available.

required
prompt str | NotGiven

Optional text to guide the model's style or to continue a previous audio segment. The prompt should be in English.

NOT_GIVEN
response_format str | NotGiven

The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

NOT_GIVEN
temperature float | NotGiven

The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

NOT_GIVEN
extra_headers Headers | None

Send extra headers

None
extra_query Query | None

Add additional query parameters to the request

None
extra_body Body | None

Add additional JSON properties to the request

None
timeout float | Timeout | None | NotGiven

Override the client-level default timeout for this request, in seconds

NOT_GIVEN

with_raw_response

with_raw_response() -> AsyncTranslationsWithRawResponse

Returns a view of this resource whose `create` wraps the HTTP response instead of returning the parsed `Translation` directly.

with_streaming_response

with_streaming_response() -> (
    AsyncTranslationsWithStreamingResponse
)

AsyncTranslationsWithRawResponse

AsyncTranslationsWithRawResponse(
    translations: AsyncTranslations,
)

Attributes:

Name Type Description
create

create instance-attribute

create = async_to_raw_response_wrapper(create)

AsyncTranslationsWithStreamingResponse

AsyncTranslationsWithStreamingResponse(
    translations: AsyncTranslations,
)

Attributes:

Name Type Description
create

create instance-attribute

create = async_to_streamed_response_wrapper(create)

Translations

Translations(client: OpenAI)

Methods:

Name Description
create

Translates audio into English.

with_raw_response
with_streaming_response

create

create(
    *,
    file: FileTypes,
    model: Union[str, Literal["whisper-1"]],
    prompt: str | NotGiven = NOT_GIVEN,
    response_format: str | NotGiven = NOT_GIVEN,
    temperature: float | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | Timeout | None | NotGiven = NOT_GIVEN
) -> Translation

Translates audio into English.

Parameters:

Name Type Description Default
file FileTypes

The audio file object (not the file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

required
model Union[str, Literal['whisper-1']]

ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available.

required
prompt str | NotGiven

Optional text to guide the model's style or to continue a previous audio segment. The prompt should be in English.

NOT_GIVEN
response_format str | NotGiven

The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.

NOT_GIVEN
temperature float | NotGiven

The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

NOT_GIVEN
extra_headers Headers | None

Send extra headers

None
extra_query Query | None

Add additional query parameters to the request

None
extra_body Body | None

Add additional JSON properties to the request

None
timeout float | Timeout | None | NotGiven

Override the client-level default timeout for this request, in seconds

NOT_GIVEN

with_raw_response

with_raw_response() -> TranslationsWithRawResponse

with_streaming_response

with_streaming_response() -> (
    TranslationsWithStreamingResponse
)

TranslationsWithRawResponse

TranslationsWithRawResponse(translations: Translations)

Attributes:

Name Type Description
create

create instance-attribute

create = to_raw_response_wrapper(create)

TranslationsWithStreamingResponse

TranslationsWithStreamingResponse(
    translations: Translations,
)

Attributes:

Name Type Description
create

create instance-attribute

create = to_streamed_response_wrapper(create)