Ishaan Jaff
0d027b22fd
[Feat-Proxy] Slack Alerting - allow using os.environ/ vars for the alert_to_webhook_url setting (#5726)
...
* allow using os.environ for slack urls
* use env vars for webhook urls
* fix types for get_secret
* fix linting
* fix linting
* fix linting
* linting fixes
* linting fix
* docs alerting slack
* fix get data
2024-09-16 18:03:37 -07:00
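The `os.environ/` prefix lets the proxy config reference a secret by environment-variable name instead of hard-coding the webhook URL. Below is a minimal sketch of how such a value could be resolved; the helper name `resolve_env_reference` and the `SLACK_WEBHOOK_URL` variable are illustrative stand-ins, not litellm's actual `get_secret` implementation.

```python
import os


def resolve_env_reference(value: str) -> str:
    """Resolve values like 'os.environ/SLACK_WEBHOOK_URL' to the env var's contents.

    Hypothetical helper for illustration; litellm's real resolution lives in get_secret().
    """
    prefix = "os.environ/"
    if value.startswith(prefix):
        env_name = value[len(prefix):]
        resolved = os.environ.get(env_name)
        if resolved is None:
            raise ValueError(f"environment variable '{env_name}' is not set")
        return resolved
    return value  # already a literal webhook URL


# Example: alert-to-webhook mappings can now point at env vars instead of raw URLs.
alert_to_webhook_url = {
    "llm_exceptions": "os.environ/SLACK_WEBHOOK_URL",
}
resolved = {k: resolve_env_reference(v) for k, v in alert_to_webhook_url.items()}
```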
Krrish Dholakia
2874b94fb1
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 09:22:47 -07:00
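`logging.Logger.exception()` behaves like `error()` but automatically attaches the active exception's traceback, which is what Sentry needs to group and debug errors usefully. A generic before/after sketch of the pattern this refactor applies:

```python
import json
import logging

logger = logging.getLogger(__name__)


def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # Before: logger.error("failed to load config") -- message only, no traceback.
        # After: .exception() logs at ERROR level *and* records the stack trace,
        # so Sentry / structured log backends see the full failure context.
        logger.exception("failed to load config from %s", path)
        raise
```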
Krrish Dholakia
e391e30285
refactor: replace 'traceback.print_exc()' with logging library
...
allows error logs to be in json format for otel logging
2024-06-06 13:47:43 -07:00
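Routing errors through the logging library (instead of `traceback.print_exc()`, which writes raw text straight to stderr) means a JSON formatter can capture them for OTEL pipelines. A minimal sketch with a hypothetical JSON formatter; litellm's actual logger setup may differ:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Illustrative formatter: emits each record as a single JSON object."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("proxy")
logger.addHandler(handler)

try:
    1 / 0
except ZeroDivisionError:
    # Before: traceback.print_exc() -- unstructured text, invisible to log exporters.
    logger.error("moderation hook failed", exc_info=True)
```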
Krrish Dholakia
aa4acaf06b
fix(llm_guard.py): enable request-specific llm guard flag
2024-04-08 21:15:33 -07:00
Krrish Dholakia
c10f1d2f25
test(test_llm_guard.py): unit testing for key-level llm guard enabling
2024-03-26 17:55:53 -07:00
Krrish Dholakia
f3a56c5af2
fix(llm_guard.py): working llm-guard 'key-specific' mode
2024-03-26 17:47:20 -07:00
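In a "key-specific" mode, the moderation hook only runs for API keys that opt in, rather than for every request. A rough sketch of that gating logic; the `permissions` field and the `"enable-llm-guard-check"` flag are assumptions for illustration, not litellm's exact schema.

```python
from typing import Any, Dict

llm_guard_mode = "key-specific"  # could also be "all" or "request-specific"


def should_run_llm_guard(user_api_key_dict: Dict[str, Any]) -> bool:
    """Decide whether to moderate this request (illustrative, not litellm's exact logic)."""
    if llm_guard_mode == "all":
        return True
    if llm_guard_mode == "key-specific":
        # Only keys explicitly flagged for llm-guard checks are moderated.
        permissions = user_api_key_dict.get("permissions", {})
        return permissions.get("enable-llm-guard-check", False) is True
    return False


# Usage: a key with the flag set gets moderated, others skip the extra call.
assert should_run_llm_guard({"permissions": {"enable-llm-guard-check": True}})
assert not should_run_llm_guard({"permissions": {}})
```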
Krrish Dholakia
7bc76ddbc3
feat(llm_guard.py): enable key-specific llm guard check
2024-03-26 17:21:51 -07:00
Ishaan Jaff
f0992c2dbd
(fix) stop using f-strings with logger
2024-03-25 10:47:18 -07:00
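Passing f-strings to a logger renders the message even when that log level is disabled; %-style lazy formatting defers the work to the logging framework. A generic before/after:

```python
import logging

logger = logging.getLogger(__name__)
payload = {"model": "gpt-4", "user": "abc"}

# Before: the f-string is rendered even if DEBUG logging is off.
# logger.debug(f"received request: {payload}")

# After: arguments are only interpolated if the record is actually emitted.
logger.debug("received request: %s", payload)
```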
Krrish Dholakia
0ae4906701
fix(llm_guard.py): await moderation check
2024-03-21 16:55:28 -07:00
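Calling an async moderation function without `await` returns a coroutine object and silently skips the check. A minimal sketch of the pattern this kind of fix addresses; `moderation_check` and `handle_request` are stand-in names:

```python
import asyncio


async def moderation_check(text: str) -> bool:
    """Stand-in async moderation call; returns True if the text is allowed."""
    await asyncio.sleep(0)  # simulate I/O to a moderation service
    return "attack" not in text


async def handle_request(text: str) -> str:
    # Bug pattern: `moderation_check(text)` without await just creates a coroutine
    # (which is truthy) and never runs the check. Awaiting it returns the real verdict.
    allowed = await moderation_check(text)
    if not allowed:
        raise PermissionError("request blocked by moderation")
    return "ok"


print(asyncio.run(handle_request("hello")))
```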
Krrish Dholakia
860ed18a2e
fix(llm_guard.py): more logging for llm guard.py
2024-03-21 11:22:52 -07:00
Krrish Dholakia
dec78ee7e5
fix: fix linting issue
2024-03-21 08:05:47 -07:00
Krrish Dholakia
e9cc6b4cc9
feat(proxy_server.py): enable llm api based prompt injection checks
...
run user calls through an llm api to check for prompt injection attacks. This happens in parallel to the actual llm call using `async_moderation_hook`
2024-03-20 22:43:42 -07:00
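Running the moderation call concurrently with the main completion means the prompt-injection check adds little or no latency. A simplified sketch of that parallel pattern using `asyncio.gather`; the function bodies are placeholders, not the proxy's actual hook signatures.

```python
import asyncio


async def async_moderation_hook(data: dict) -> None:
    """Placeholder: ask an LLM to classify the prompt; raise if it looks like an injection."""
    await asyncio.sleep(0.1)  # simulated moderation LLM call
    if "ignore previous instructions" in str(data.get("messages", "")).lower():
        raise PermissionError("prompt injection detected")


async def call_llm(data: dict) -> str:
    """Placeholder for the actual completion call."""
    await asyncio.sleep(0.2)
    return "model response"


async def handle(data: dict) -> str:
    # The moderation check and the real completion run in parallel; if moderation
    # raises, the combined call fails and the response is never returned to the user.
    _, response = await asyncio.gather(async_moderation_hook(data), call_llm(data))
    return response


print(asyncio.run(handle({"messages": [{"role": "user", "content": "hi"}]})))
```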
Krrish Dholakia
7089b13632
fix(llm_guard.py): add streaming hook for moderation calls
2024-02-20 20:31:32 -08:00
Krrish Dholakia
c7e7d508cb
docs(enterprise.md): add llm guard to docs
2024-02-19 21:05:01 -08:00
Krrish Dholakia
66e4abcb0d
feat(llm_guard.py): support llm guard for content moderation
...
https://github.com/BerriAI/litellm/issues/2056
2024-02-19 20:51:25 -08:00
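LLM Guard is typically deployed as a separate API service that the proxy calls before forwarding a prompt. A hedged sketch of such a call; the base URL, the `/analyze/prompt` path, and the response fields are assumptions based on the llm-guard project's API, not verified against litellm's integration.

```python
import httpx


async def llm_guard_moderation(prompt: str, api_base: str = "http://localhost:8192") -> None:
    """Send the prompt to an LLM Guard API instance and block flagged prompts.

    Endpoint path and response shape are illustrative assumptions.
    """
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            f"{api_base}/analyze/prompt",
            json={"prompt": prompt},
            timeout=10.0,
        )
        resp.raise_for_status()
        result = resp.json()
        # Assumed response field: "is_valid" is False when a scanner flags the prompt.
        if result.get("is_valid") is False:
            raise PermissionError(f"LLM Guard blocked the prompt: {result}")
```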