Merge pull request #4569 from BerriAI/litelm_control_guardrails_api_key

feat - control guardrails per api key
Ishaan Jaff 2024-07-05 19:56:15 -07:00 committed by GitHub
commit 51e7099b13
2 changed files with 132 additions and 2 deletions


@@ -29,6 +29,9 @@ litellm_settings:
     - prompt_injection: # your custom name for guardrail
         callbacks: [lakera_prompt_injection] # litellm callbacks to use
         default_on: true # will run on all llm requests when true
+    - pii_masking: # your custom name for guardrail
+        callbacks: [presidio] # use the litellm presidio callback
+        default_on: false # by default this is off for all requests
     - hide_secrets_guard:
         callbacks: [hide_secrets]
         default_on: false
@@ -37,6 +40,12 @@ litellm_settings:
         default_on: false
```
+:::info
+Since `pii_masking` is off by default for all requests, [you can switch it on per API Key](#switch-guardrails-onoff-per-api-key)
+:::
### 2. Test it
Run litellm proxy
@@ -185,6 +194,85 @@ print(response)
</Tabs>
## Switch Guardrails On/Off Per API Key
❓ Use this when you need to switch guardrails on/off per API Key
**Step 1** Create Key with `pii_masking` On
**NOTE:** We defined `pii_masking` in [step 1](#1-setup-guardrails-on-litellm-proxy-configyaml)
👉 Set `"permissions": {"pii_masking": true}` with either `/key/generate` or `/key/update`
This means the `pii_masking` guardrail is on for all requests from this API Key
:::info
If you need to switch `pii_masking` off for an API Key, set `"permissions": {"pii_masking": false}` with either `/key/generate` or `/key/update`
:::
<Tabs>
<TabItem value="/key/generate" label="/key/generate">
```shell
curl --location 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"permissions": {"pii_masking": true}
}'
```
```shell
# {"permissions":{"pii_masking":true},"key":"sk-jNm1Zar7XfNdZXp49Z1kSQ"}
```
</TabItem>
<TabItem value="/key/update" label="/key/update">
```shell
curl --location 'http://0.0.0.0:4000/key/update' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"key": "sk-jNm1Zar7XfNdZXp49Z1kSQ",
"permissions": {"pii_masking": true}
}'
```
```shell
# {"permissions":{"pii_masking":true},"key":"sk-jNm1Zar7XfNdZXp49Z1kSQ"}
```
</TabItem>
</Tabs>
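
For reference, a minimal Python sketch of the same `/key/generate` call — this assumes the `requests` library and the master key `sk-1234` used above:

```python
import requests

# Mirror of the curl /key/generate example above: create a key with the
# pii_masking guardrail switched on for all of its requests
resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # proxy master key
    json={"permissions": {"pii_masking": True}},
)
print(resp.json())  # e.g. {"permissions": {"pii_masking": true}, "key": "sk-..."}
```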
**Step 2** Test it with the new key
```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-jNm1Zar7XfNdZXp49Z1kSQ' \
--header 'Content-Type: application/json' \
--data '{
"model": "llama3",
"messages": [
{
"role": "user",
"content": "does my phone number look correct - +1 412-612-9992"
}
]
}'
```
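
The same request from Python, if you prefer the `openai` SDK — a sketch assuming `openai>=1.0` and the proxy at `http://0.0.0.0:4000`:

```python
import openai

# Point the OpenAI client at the LiteLLM proxy, authenticating with the
# pii_masking-enabled key generated in Step 1
client = openai.OpenAI(
    api_key="sk-jNm1Zar7XfNdZXp49Z1kSQ",
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="llama3",
    messages=[
        {
            "role": "user",
            "content": "does my phone number look correct - +1 412-612-9992",
        }
    ],
)
print(response.choices[0].message.content)
```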
Expect to NOT see `+1 412-612-9992` in the server logs for your callback.
:::info
The `pii_masking` guardrail ran on this request because the API key `sk-jNm1Zar7XfNdZXp49Z1kSQ` has `"permissions": {"pii_masking": true}`
:::
## Spec for `guardrails` on litellm config
@@ -208,9 +296,9 @@ litellm_settings:
#### Guardrail: `prompt_injection`: Configuration for detecting and preventing prompt injection attacks.
-- `callbacks`: List of LiteLLM callbacks used for this guardrail. [Can be one of `[lakera_prompt_injection, hide_secrets, llmguard_moderations, llamaguard_moderations, google_text_moderation]`](enterprise#content-moderation)
+- `callbacks`: List of LiteLLM callbacks used for this guardrail. [Can be one of `[lakera_prompt_injection, hide_secrets, presidio, llmguard_moderations, llamaguard_moderations, google_text_moderation]`](enterprise#content-moderation)
- `default_on`: Boolean flag determining if this guardrail runs on all LLM requests by default.
#### Guardrail: `your-custom-guardrail`: Configuration for a user-defined custom guardrail.
-- `callbacks`: List of callbacks for this custom guardrail. Can be one of `[lakera_prompt_injection, hide_secrets, llmguard_moderations, llamaguard_moderations, google_text_moderation]`
+- `callbacks`: List of callbacks for this custom guardrail. Can be one of `[lakera_prompt_injection, hide_secrets, presidio, llmguard_moderations, llamaguard_moderations, google_text_moderation]`
- `default_on`: Boolean flag determining if this custom guardrail runs by default, set to false.


@@ -1,5 +1,6 @@
 from litellm._logging import verbose_proxy_logger
 from litellm.proxy.guardrails.init_guardrails import guardrail_name_config_map
+from litellm.proxy.proxy_server import UserAPIKeyAuth
 from litellm.types.guardrails import *
@@ -47,3 +48,44 @@ async def should_proceed_based_on_metadata(data: dict, guardrail_name: str) -> b
                    return False
    return True

async def should_proceed_based_on_api_key(
    user_api_key_dict: UserAPIKeyAuth, guardrail_name: str
) -> bool:
    """
    Checks if this guardrail should be applied to this call
    """
    if user_api_key_dict.permissions is not None:
        # { prompt_injection: true, rail_2: false }
        verbose_proxy_logger.debug(
            "Guardrails valid for API Key= %s - checking which to apply",
            user_api_key_dict.permissions,
        )

        if not isinstance(user_api_key_dict.permissions, dict):
            verbose_proxy_logger.error(
                "API Key permissions must be a dict - %s running guardrail %s",
                user_api_key_dict,
                guardrail_name,
            )
            return True

        for _guardrail_name, should_run in user_api_key_dict.permissions.items():
            if should_run is False:
                verbose_proxy_logger.debug(
                    "Guardrail %s skipped because the API key set it to False",
                    _guardrail_name,
                )
                continue

            # look up the guardrail in guardrail_name_config_map
            guardrail_item: GuardrailItem = guardrail_name_config_map[_guardrail_name]

            guardrail_callbacks = guardrail_item.callbacks
            if guardrail_name in guardrail_callbacks:
                return True

        # Do not proceed if none of the key's "permissions" - e.g. { "pii_masking": true } - map to this guardrail
        return False

    return True
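
To summarize the decision logic above, here is a self-contained sketch of the same semantics — a standalone illustration, not the litellm implementation; `guardrail_callback_map` is a hypothetical stand-in for `guardrail_name_config_map`:

```python
from typing import Dict, List, Optional

def guardrail_enabled_for_key(
    permissions: Optional[Dict[str, bool]],
    guardrail_callback_map: Dict[str, List[str]],
    callback_name: str,
) -> bool:
    # No key-level permissions -> proceed (the guardrail's default_on setting decides)
    if permissions is None:
        return True
    for guardrail, should_run in permissions.items():
        if not should_run:
            continue  # explicitly switched off for this key
        # a permission set to true enables every callback of that guardrail
        if callback_name in guardrail_callback_map.get(guardrail, []):
            return True
    return False  # permissions exist, but none of them enable this callback

# e.g. the key from Step 1: {"pii_masking": true} enables the presidio callback
assert guardrail_enabled_for_key({"pii_masking": True}, {"pii_masking": ["presidio"]}, "presidio")
assert not guardrail_enabled_for_key({"pii_masking": False}, {"pii_masking": ["presidio"]}, "presidio")
```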