Commit graph

12 commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Krrish Dholakia | b6cd200676 | fix(llm_guard.py): enable request-specific llm guard flag | 2024-04-08 21:15:33 -07:00 |
| Krrish Dholakia | 1046a63521 | test(test_llm_guard.py): unit testing for key-level llm guard enabling | 2024-03-26 17:55:53 -07:00 |
| Krrish Dholakia | 6d418a2920 | fix(llm_guard.py): working llm-guard 'key-specific' mode | 2024-03-26 17:47:20 -07:00 |
| Krrish Dholakia | e10eb8f6fe | feat(llm_guard.py): enable key-specific llm guard check | 2024-03-26 17:21:51 -07:00 |
| Ishaan Jaff | 5d121a9f3c | (fix) stop using f strings with logger | 2024-03-25 10:47:18 -07:00 |
| Krrish Dholakia | b5457beba6 | fix(llm_guard.py): await moderation check | 2024-03-21 16:55:28 -07:00 |
| Krrish Dholakia | c4dad3f34f | fix(llm_guard.py): more logging for llm guard.py | 2024-03-21 11:22:52 -07:00 |
| Krrish Dholakia | 2ce5de903f | fix: fix linting issue | 2024-03-21 08:05:47 -07:00 |
| Krrish Dholakia | d91f9a9f50 | feat(proxy_server.py): enable llm api based prompt injection checks. Run user calls through an llm api to check for prompt injection attacks; this happens in parallel to the actual llm call using `async_moderation_hook`. | 2024-03-20 22:43:42 -07:00 |
| Krrish Dholakia | 49847347d0 | fix(llm_guard.py): add streaming hook for moderation calls | 2024-02-20 20:31:32 -08:00 |
| Krrish Dholakia | fde478f70b | docs(enterprise.md): add llm guard to docs | 2024-02-19 21:05:01 -08:00 |
| Krrish Dholakia | 14513af2e2 | feat(llm_guard.py): support llm guard for content moderation (https://github.com/BerriAI/litellm/issues/2056) | 2024-02-19 20:51:25 -08:00 |