Merge pull request #4524 from BerriAI/litellm_allow_controlling_guardrails_per_key

[Enterprise] Check if Key should run secret_detection callback
Ishaan Jaff 2024-07-02 18:02:34 -07:00 committed by GitHub
commit 0d852d7011
2 changed files with 83 additions and 0 deletions

@@ -599,6 +599,77 @@ https://api.groq.com/openai/v1/ \
}
```
### Secret Detection On/Off per API Key
❓ Use this when you need to switch guardrails on/off per API Key
**Step 1** Create Key with `hide_secrets` Off
👉 Set `"permissions": {"hide_secrets": false}` with either `/key/generate` or `/key/update`
This means the `hide_secrets` guardrail is off for all requests from this API Key.
<Tabs>
<TabItem value="/key/generate" label="/key/generate">
```shell
curl --location 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
    "permissions": {"hide_secrets": false}
}'
```
```shell
# {"permissions":{"hide_secrets":false},"key":"sk-jNm1Zar7XfNdZXp49Z1kSQ"}
```
</TabItem>
<TabItem value="/key/update" label="/key/update">
```shell
curl --location 'http://0.0.0.0:4000/key/update' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
    "key": "sk-jNm1Zar7XfNdZXp49Z1kSQ",
    "permissions": {"hide_secrets": false}
}'
```
```shell
# {"permissions":{"hide_secrets":false},"key":"sk-jNm1Zar7XfNdZXp49Z1kSQ"}
```
</TabItem>
</Tabs>
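If you prefer to script this step, here is a minimal Python sketch of the same two calls using `requests` (it assumes, as in the curl examples above, that the proxy is reachable at `http://0.0.0.0:4000` and that `sk-1234` is an admin key):
```python
import requests

PROXY_BASE = "http://0.0.0.0:4000"  # assumption: proxy running locally, as in the examples above
ADMIN_KEY = "sk-1234"               # assumption: admin key from the examples above

headers = {
    "Authorization": f"Bearer {ADMIN_KEY}",
    "Content-Type": "application/json",
}

# /key/generate: create a key with the hide_secrets guardrail turned off
resp = requests.post(
    f"{PROXY_BASE}/key/generate",
    headers=headers,
    json={"permissions": {"hide_secrets": False}},
)
new_key = resp.json()["key"]

# /key/update: flip the permission on an existing key instead
requests.post(
    f"{PROXY_BASE}/key/update",
    headers=headers,
    json={"key": new_key, "permissions": {"hide_secrets": False}},
)
```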
**Step 2** Test it with the new key
```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-jNm1Zar7XfNdZXp49Z1kSQ' \
--header 'Content-Type: application/json' \
--data '{
    "model": "llama3",
    "messages": [
        {
            "role": "user",
            "content": "does my openai key look well formatted OpenAI_API_KEY=sk-1234777"
        }
    ]
}'
```
Expect to see `sk-1234777` unmasked in your server logs and in your callback, since secret detection is disabled for this key.
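You can run the same test from Python. A minimal sketch using the OpenAI SDK pointed at the proxy (the base URL, key, and model name follow the curl example above; adjust them for your deployment):
```python
from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy, using the key that has hide_secrets disabled
client = OpenAI(
    api_key="sk-jNm1Zar7XfNdZXp49Z1kSQ",
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="llama3",
    messages=[
        {
            "role": "user",
            "content": "does my openai key look well formatted OpenAI_API_KEY=sk-1234777",
        }
    ],
)
print(response.choices[0].message.content)
```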
:::info
The `hide_secrets` guardrail check did not run on this request because the API key `sk-jNm1Zar7XfNdZXp49Z1kSQ` has `"permissions": {"hide_secrets": false}`.
:::
### Content Moderation with LLM Guard
Set the LLM Guard API Base in your environment

@@ -32,6 +32,7 @@ from litellm._logging import verbose_proxy_logger
litellm.set_verbose = True
GUARDRAIL_NAME = "hide_secrets"
_custom_plugins_path = "file://" + os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "secrets_plugins"
@@ -464,6 +465,14 @@ class _ENTERPRISE_SecretDetection(CustomLogger):
        return detected_secrets

    async def should_run_check(self, user_api_key_dict: UserAPIKeyAuth) -> bool:
        # Per-key override: if the key's permissions explicitly set hide_secrets to false,
        # skip the secret-detection guardrail for its requests
        if user_api_key_dict.permissions is not None:
            if GUARDRAIL_NAME in user_api_key_dict.permissions:
                if user_api_key_dict.permissions[GUARDRAIL_NAME] is False:
                    return False
        return True
    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(
        self,
@@ -475,6 +484,9 @@ class _ENTERPRISE_SecretDetection(CustomLogger):
        from detect_secrets import SecretsCollection
        from detect_secrets.settings import default_settings

        # Respect the per-key permission: skip secret scanning when hide_secrets is disabled for this key
        if await self.should_run_check(user_api_key_dict) is False:
            return

        if "messages" in data and isinstance(data["messages"], list):
            for message in data["messages"]:
                if "content" in message and isinstance(message["content"], str):