docs on oauth 2.0

This commit is contained in:
Ishaan Jaff 2024-08-16 13:55:28 -07:00
parent cd28b6607e
commit 0d41e2972b
2 changed files with 58 additions and 0 deletions

@ -0,0 +1,57 @@
# OAuth 2.0 Authentication
Use this if you want to use an OAuth 2.0 token to make `/chat/completions` and `/embeddings` requests to the LiteLLM Proxy.
## Usage
1. Set the environment variables:
```bash
export OAUTH_TOKEN_INFO_ENDPOINT="https://your-provider.com/token/info"
export OAUTH_USER_ID_FIELD_NAME="sub"
export OAUTH_USER_ROLE_FIELD_NAME="role"
export OAUTH_USER_TEAM_ID_FIELD_NAME="team_id"
```
- `OAUTH_TOKEN_INFO_ENDPOINT`: URL to validate OAuth tokens
- `OAUTH_USER_ID_FIELD_NAME`: Field in token info response containing user ID
- `OAUTH_USER_ROLE_FIELD_NAME`: Field in token info for user's role
- `OAUTH_USER_TEAM_ID_FIELD_NAME`: Field in token info for user's team ID
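For illustration, here is a sketch of how a token-info response could be mapped to those field names. The response payload and the `extract_user_fields` helper are hypothetical examples, not LiteLLM internals; the default field names match the env vars above.

```python
# Hypothetical sketch: map a token-info endpoint response to user fields
# using the configured field names. Not LiteLLM's actual implementation.
import os

def extract_user_fields(token_info: dict) -> dict:
    """Pull the user id, role, and team id out of a token-info response."""
    return {
        "user_id": token_info.get(os.environ.get("OAUTH_USER_ID_FIELD_NAME", "sub")),
        "user_role": token_info.get(os.environ.get("OAUTH_USER_ROLE_FIELD_NAME", "role")),
        "team_id": token_info.get(os.environ.get("OAUTH_USER_TEAM_ID_FIELD_NAME", "team_id")),
    }

# Example payload as a provider might return it:
info = {"sub": "user-123", "role": "internal_user", "team_id": "team-a", "exp": 1923456789}
print(extract_user_fields(info))
# {'user_id': 'user-123', 'user_role': 'internal_user', 'team_id': 'team-a'}
```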
2. Enable OAuth 2.0 authentication in your LiteLLM `config.yaml`:
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

general_settings:
  master_key: sk-1234
  enable_oauth2_auth: true
```
3. Pass the OAuth token as a Bearer token in requests to LiteLLM:
```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <your-oauth-token>' \
--data '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "what llm are you"
}
]
}'
```
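The same request can be assembled in Python. This is a minimal sketch: the token value is a placeholder you obtain from your OAuth provider, and the `requests.post` call is shown only as a comment.

```python
import json

# Placeholder: in practice this token comes from your OAuth provider.
token = "<your-oauth-token>"

headers = {
    "Content-Type": "application/json",
    # The proxy validates this token against OAUTH_TOKEN_INFO_ENDPOINT.
    "Authorization": f"Bearer {token}",
}
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}],
}

# Send with any HTTP client, e.g.:
# requests.post("http://0.0.0.0:4000/chat/completions", headers=headers, json=payload)
print(json.dumps(payload, indent=2))
```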
## Debugging
Start the LiteLLM Proxy in [`--detailed_debug` mode](cli.md#detailed_debug) to see more verbose logs.

@ -66,6 +66,7 @@ const sidebars = {
"proxy/customers",
"proxy/billing",
"proxy/token_auth",
"proxy/oauth2",
"proxy/alerting",
"proxy/ui",
"proxy/prometheus",