docs on /moderations

Ishaan Jaff 2024-11-27 15:43:15 -08:00
parent a9b564782c
commit 3e162b3b8c
2 changed files with 23 additions and 5 deletions

@@ -18,9 +18,19 @@ response = moderation(
```
</TabItem>
<TabItem value="openai" label="LiteLLM Proxy Server">
<TabItem value="proxy" label="LiteLLM Proxy Server">
-For `/moderations` endpoint, there is no need
+For `/moderations` endpoint, there is **no need to specify `model` in the request or on the litellm config.yaml**
Start litellm proxy server
```
litellm
```
<Tabs>
<TabItem value="python" label="OpenAI Python SDK">
```python
from openai import OpenAI
@@ -31,12 +41,13 @@ client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")
response = client.moderations.create(
input="hello from litellm",
model="text-moderation-stable"
model="text-moderation-stable" # optional, defaults to `omni-moderation-latest`
)
print(response)
```
</TabItem>
<TabItem value="curl" label="Curl Request">
```shell
@@ -48,6 +59,9 @@ curl --location 'http://0.0.0.0:4000/moderations' \
</TabItem>
</Tabs>
</TabItem>
</Tabs>
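
As a usage illustration of the note above that `/moderations` needs no `model`, here is a minimal sketch of calling the proxy without one, so the default (`omni-moderation-latest`) applies. It assumes the proxy started with `litellm` above is listening on `http://0.0.0.0:4000`, and `<proxy-api-key>` is a placeholder.

```python
# Minimal sketch: /moderations via the LiteLLM proxy with no `model` specified,
# relying on the default noted above. `<proxy-api-key>` is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.moderations.create(input="hello from litellm")  # no model needed
print(response.results[0].flagged)
```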
## Input Params
LiteLLM accepts and translates the [OpenAI Moderation params](https://platform.openai.com/docs/api-reference/moderations) across all supported providers.
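
As a sketch of what "accepts and translates the OpenAI Moderation params" looks like in the Python SDK, the call below passes the two OpenAI params (`input`, `model`) to `litellm.moderation`, mirroring the quickstart snippet referenced in the hunk header above; the API key value and model choice are illustrative assumptions.

```python
# Sketch: litellm.moderation() with the OpenAI Moderation params.
# The API key and model value below are placeholders/assumptions.
import os
from litellm import moderation

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key

response = moderation(
    input="hello from litellm",        # text to classify
    model="omni-moderation-latest",    # optional OpenAI moderation model
)
print(response)
```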
@@ -111,3 +125,8 @@ Here's the exact json output and type you can expect from all moderation calls:
```
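
The exact JSON documented above (collapsed in this diff view) follows the OpenAI moderation response shape, so a caller can read it roughly as below; treating `model` as optional here and the attribute-style field access are assumptions based on that shape.

```python
# Sketch: reading the moderation response fields documented above
# (OpenAI response shape: results[0].flagged, categories, category_scores).
import os
from litellm import moderation

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key

response = moderation(input="hello from litellm")  # model assumed optional here

result = response.results[0]
print(result.flagged)          # True if any category was flagged
print(result.categories)       # per-category booleans
print(result.category_scores)  # per-category confidence scores
```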
## **Supported Providers**
| Provider |
|-------------|
| OpenAI |

@@ -246,7 +246,6 @@ const sidebars = {
"completion/usage",
],
},
"text_completion",
"embedding/supported_embedding",
"image_generation",
{
@@ -262,7 +261,7 @@ const sidebars = {
"batches",
"realtime",
"fine_tuning",
"moderation","
"moderation",
{
type: "link",
label: "Use LiteLLM Proxy with Vertex, Bedrock SDK",