moderations
The `moderations` module provides functionality to submit text for moderation and determine whether it violates OpenAI's content policy.

Moderation is particularly useful for developers looking to ensure that the content generated or processed by their applications adheres to OpenAI's content policy. By leveraging the content moderation models provided by OpenAI, applications can automatically classify and filter out text that might be considered harmful or inappropriate.
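For example, a minimal sketch of calling the endpoint through the synchronous client (assumes the `openai` package is installed and `OPENAI_API_KEY` is set in the environment; `flagged_categories` is a small illustrative helper, not part of the SDK):

```python
import os

def flagged_categories(categories: dict) -> list:
    """Return the names of the moderation categories that were flagged."""
    return sorted(name for name, hit in categories.items() if hit)

# The network call is guarded so the sketch is a no-op without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    response = client.moderations.create(input="I want to hurt them.")
    result = response.results[0]
    print("flagged:", result.flagged)
    print("categories:", flagged_categories(result.categories.model_dump()))
```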
Classes:
Name | Description |
---|---|
AsyncModerations | |
AsyncModerationsWithRawResponse | |
AsyncModerationsWithStreamingResponse | |
Moderations | |
ModerationsWithRawResponse | |
ModerationsWithStreamingResponse | |
AsyncModerations
```python
AsyncModerations(client: AsyncOpenAI)
```
Methods:
Name | Description |
---|---|
create | Classifies whether text is potentially harmful. |
with_raw_response | |
with_streaming_response | |
create
async
```python
create(
    *,
    input: Union[str, List[str]],
    model: (
        Union[
            str,
            Literal[
                "text-moderation-latest",
                "text-moderation-stable",
            ],
        ]
        | NotGiven
    ) = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | Timeout | None | NotGiven = NOT_GIVEN,
) -> ModerationCreateResponse
```
Classifies whether text is potentially harmful.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | `Union[str, List[str]]` | The input text to classify. | required |
model | `Union[str, Literal['text-moderation-latest', 'text-moderation-stable']] \| NotGiven` | Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest`, which is automatically upgraded over time. | `NOT_GIVEN` |
extra_headers | `Headers \| None` | Send extra headers. | `None` |
extra_query | `Query \| None` | Add additional query parameters to the request. | `None` |
extra_body | `Body \| None` | Add additional JSON properties to the request. | `None` |
timeout | `float \| Timeout \| None \| NotGiven` | Override the client-level default timeout for this request, in seconds. | `NOT_GIVEN` |
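The async variant is called the same way, just awaited. A sketch using `AsyncOpenAI` (assumes `OPENAI_API_KEY` is set; `moderate` is an illustrative wrapper, not part of the SDK):

```python
import asyncio
import os

async def moderate(texts):
    """Submit one or more strings and return their flagged statuses."""
    from openai import AsyncOpenAI

    client = AsyncOpenAI()
    response = await client.moderations.create(input=texts)
    return [result.flagged for result in response.results]

# Guarded so the sketch does nothing without credentials.
if os.environ.get("OPENAI_API_KEY"):
    print(asyncio.run(moderate(["hello there", "I want to hurt them."])))
```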
AsyncModerationsWithRawResponse
```python
AsyncModerationsWithRawResponse(moderations: AsyncModerations)
```
Attributes:
Name | Type | Description |
---|---|---|
create | | |
AsyncModerationsWithStreamingResponse
```python
AsyncModerationsWithStreamingResponse(moderations: AsyncModerations)
```
Attributes:
Name | Type | Description |
---|---|---|
create | | |
Moderations
```python
Moderations(client: OpenAI)
```
Methods:
Name | Description |
---|---|
create | Classifies whether text is potentially harmful. |
with_raw_response | |
with_streaming_response | |
create
```python
create(
    *,
    input: Union[str, List[str]],
    model: (
        Union[
            str,
            Literal[
                "text-moderation-latest",
                "text-moderation-stable",
            ],
        ]
        | NotGiven
    ) = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | Timeout | None | NotGiven = NOT_GIVEN,
) -> ModerationCreateResponse
```
Classifies whether text is potentially harmful.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | `Union[str, List[str]]` | The input text to classify. | required |
model | `Union[str, Literal['text-moderation-latest', 'text-moderation-stable']] \| NotGiven` | Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest`, which is automatically upgraded over time. | `NOT_GIVEN` |
extra_headers | `Headers \| None` | Send extra headers. | `None` |
extra_query | `Query \| None` | Add additional query parameters to the request. | `None` |
extra_body | `Body \| None` | Add additional JSON properties to the request. | `None` |
timeout | `float \| Timeout \| None \| NotGiven` | Override the client-level default timeout for this request, in seconds. | `NOT_GIVEN` |
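Because `input` also accepts a `List[str]`, a batch of texts can be classified in one request; the `results` list is parallel to the inputs. A sketch (assumes `OPENAI_API_KEY` is set; `partition_by_moderation` is an illustrative helper, not part of the SDK):

```python
import os

def partition_by_moderation(texts, flags):
    """Split texts into (allowed, blocked) using parallel flagged results."""
    allowed = [t for t, f in zip(texts, flags) if not f]
    blocked = [t for t, f in zip(texts, flags) if f]
    return allowed, blocked

# Guarded so the sketch does nothing without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    texts = ["good morning", "I want to hurt them."]
    response = client.moderations.create(
        input=texts,
        model="text-moderation-stable",  # pin the stable model explicitly
    )
    allowed, blocked = partition_by_moderation(
        texts, [r.flagged for r in response.results]
    )
    print("allowed:", allowed)
    print("blocked:", blocked)
```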
ModerationsWithRawResponse
```python
ModerationsWithRawResponse(moderations: Moderations)
```
Attributes:
Name | Type | Description |
---|---|---|
create | | |
ModerationsWithStreamingResponse
```python
ModerationsWithStreamingResponse(moderations: Moderations)
```
Attributes:
Name | Type | Description |
---|---|---|
create | | |
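The raw-response wrappers are reached through `with_raw_response` on a resource; `create` then returns the HTTP-level response, from which the usual model can be recovered with `parse()`. A sketch (assumes `OPENAI_API_KEY` is set; `flagged_via_raw` is an illustrative wrapper, not part of the SDK):

```python
import os

def flagged_via_raw(client, text):
    """Call moderations through the raw-response wrapper and parse the result."""
    raw = client.moderations.with_raw_response.create(input=text)
    # raw exposes HTTP details; parse() recovers the ModerationCreateResponse
    print("request id:", raw.headers.get("x-request-id"))
    return raw.parse().results[0].flagged

# Guarded so the sketch does nothing without credentials.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    print(flagged_via_raw(OpenAI(), "hello there"))
```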