forked from phoenix/litellm-mirror
docs(enterprise.md): add llm guard to docs
This commit is contained in:
parent
14513af2e2
commit
fde478f70b
2 changed files with 33 additions and 1 deletion
@@ -14,6 +14,7 @@ Features here are behind a commercial license in our `/enterprise` folder. [**Se
Features:
- [ ] Content Moderation with LlamaGuard
- [ ] Content Moderation with Google Text Moderations
- [ ] Content Moderation with LLM Guard
- [ ] Tracking Spend for Custom Tags
## Content Moderation with LlamaGuard
@@ -48,6 +49,33 @@ callbacks: ["llamaguard_moderations"]
  llamaguard_unsafe_content_categories: /path/to/llamaguard_prompt.txt
```
## Content Moderation with LLM Guard
Set the LLM Guard API Base in your environment
```env
LLM_GUARD_API_BASE = "http://0.0.0.0:8000"
```
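The `http://0.0.0.0:8000` value above is only an example; set `LLM_GUARD_API_BASE` to the address where your LLM Guard API server is actually reachable.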
Add `llmguard_moderations` as a callback
```yaml
litellm_settings:
  callbacks: ["llmguard_moderations"]
```
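If the proxy is already running, restart it with the updated config (for example `litellm --config /path/to/config.yaml`) so the new callback is picked up.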
Now you can easily test it
- Make a regular `/chat/completions` call (a sketch request is shown after this list)
- Check your proxy logs for any statement with `LLM Guard:`
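As a quick sketch (not part of this commit), the test call can be made with the OpenAI Python client pointed at the proxy; the port, API key, and model name below are assumptions, so adjust them to your deployment:

```python
# Minimal test request against a locally running LiteLLM proxy.
# Assumes the proxy listens on http://0.0.0.0:4000 and that "gpt-3.5-turbo"
# is a model configured in your config.yaml; adjust both as needed.
import openai

client = openai.OpenAI(
    api_key="sk-1234",  # your proxy key, or any string if auth is disabled
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello world"}],
)
print(response.choices[0].message.content)
```

If the prompt passes the scanners, the call should complete normally and the proxy log line shown below appears.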
Expected results:
```
LLM Guard: Received response - {"sanitized_prompt": "hello world", "is_valid": true, "scanners": { "Regex": 0.0 }}
```
## Content Moderation with Google Text Moderation
Requires your GOOGLE_APPLICATION_CREDENTIALS to be set in your .env (same as VertexAI).
@@ -102,6 +130,8 @@ Here are the category specific values:
| "finance" | finance_threshold: 0.1 |
| "legal" | legal_threshold: 0.1 |
## Tracking Spend for Custom Tags
Requirements:
@@ -66,7 +66,9 @@ class _ENTERPRISE_LLMGuard(CustomLogger):
        analyze_url, json=analyze_payload
    ) as response:
        redacted_text = await response.json()

    verbose_proxy_logger.info(
        f"LLM Guard: Received response - {redacted_text}"
    )
    if redacted_text is not None:
        if (
            redacted_text.get("is_valid", None) is not None