forked from phoenix/litellm-mirror
Merge pull request #2037 from BerriAI/litellm_request_level_pii_masking
feat(presidio_pii_masking.py): allow request level controls for turning on/off pii masking
This commit is contained in: commit 6f77a4a31e
3 changed files with 141 additions and 4 deletions
## Turn on/off per request
The proxy supports 2 request-level PII controls:
- *no-pii*: Optional(bool) - Allows the user to turn off PII masking for a request.
- *output_parse_pii*: Optional(bool) - Allows the user to turn off PII output parsing for a request.
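As a rough sketch of how these two flags interact with a key's permissions, the helper below merges request-level controls with the key settings. The function and field names other than `allow_pii_controls`, `no-pii`, and `output_parse_pii` are illustrative, not LiteLLM's internal API:

```python
# Hypothetical sketch: honoring the two request-level PII controls.
# `resolve_pii_settings` and the returned dict keys are illustrative names.
def resolve_pii_settings(key_permissions: dict, content_safety: dict) -> dict:
    """Merge request-level PII controls with a key's permissions."""
    # Request-level overrides are only honored when the key allows them.
    if not key_permissions.get("allow_pii_controls", False):
        return {"mask_pii": True, "output_parse_pii": True}
    return {
        "mask_pii": not content_safety.get("no-pii", False),
        "output_parse_pii": content_safety.get("output_parse_pii", True),
    }

# A request that disables output parsing but keeps input masking:
settings = resolve_pii_settings(
    {"allow_pii_controls": True},
    {"output_parse_pii": False},
)
print(settings)  # {'mask_pii': True, 'output_parse_pii': False}
```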
### Usage
**Step 1. Create key with pii permissions**
Set `allow_pii_controls` to true for a given key. This will allow the user to set request-level PII controls.
```bash
curl --location 'http://0.0.0.0:8000/key/generate' \
--header 'Authorization: Bearer my-master-key' \
--header 'Content-Type: application/json' \
--data '{
    "permissions": {"allow_pii_controls": true}
}'
```
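If you prefer Python over curl, the same key-generation call can be built with the standard library. This is a sketch assuming the proxy is running locally with the master key shown above; the request is constructed here and only sent when uncommented:

```python
import json
import urllib.request

# Build the same POST /key/generate request as the curl command above.
payload = json.dumps({"permissions": {"allow_pii_controls": True}}).encode("utf-8")
req = urllib.request.Request(
    "http://0.0.0.0:8000/key/generate",
    data=payload,
    headers={
        "Authorization": "Bearer my-master-key",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Send it against a running proxy:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```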
**Step 2. Turn off pii output parsing**
```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url="http://0.0.0.0:8000"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "My name is Jane Doe, my number is 8382043839",
        }
    ],
    model="gpt-3.5-turbo",
    extra_body={
        "content_safety": {"output_parse_pii": False}
    }
)
```
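Both controls ride in the same `content_safety` object of `extra_body`. A quick sketch of the payload for each flag (flag names as documented above; the variable names are illustrative):

```python
import json

# Skip PII masking on the input entirely for this request:
skip_input_masking = {"content_safety": {"no-pii": True}}

# Mask the input but leave placeholders like [PERSON] in the output:
keep_masked_output = {"content_safety": {"output_parse_pii": False}}

print(json.dumps(keep_masked_output))
```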
**Step 3: See response**
```
{
  "id": "chatcmpl-8c5qbGTILZa1S4CK3b31yj5N40hFN",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hi [PERSON], what can I help you with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1704089632,
  "model": "gpt-35-turbo",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 47,
    "prompt_tokens": 12,
    "total_tokens": 59
  },
  "_response_ms": 1753.426
}
```
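Because output parsing was disabled, placeholders such as `[PERSON]` remain in the response instead of being swapped back to the original values. A small sketch of spotting them (the regex pattern is an assumption about the placeholder format shown above):

```python
import re

content = "Hi [PERSON], what can I help you with?"

# Placeholders in the masked output look like [ENTITY_TYPE].
placeholders = re.findall(r"\[([A-Z_]+)\]", content)
print(placeholders)  # ['PERSON']
```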