Runs user calls through an LLM API to check for prompt injection attacks. This happens in parallel to the actual LLM call using `async_moderation_hook` (see the sketch after the file list below).

Files in this directory:

- banned_keywords.py
- blocked_user_list.py
- google_text_moderation.py
- llama_guard.py
- llm_guard.py
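As a minimal sketch of the parallel-moderation pattern described above (not the actual implementation in these files), the check can be scheduled alongside the main LLM call so neither blocks the other. The names `check_prompt_injection`, `call_llm`, and `guarded_completion` below are hypothetical placeholders, and the keyword matching stands in for a real moderation LLM call.

```python
import asyncio

async def check_prompt_injection(messages: list[dict]) -> None:
    """Placeholder moderation check: raise if the input looks like an injection attempt."""
    suspicious = ("ignore previous instructions", "disregard the system prompt")
    text = " ".join(str(m.get("content", "")) for m in messages).lower()
    if any(phrase in text for phrase in suspicious):
        raise ValueError("Potential prompt injection detected")

async def call_llm(messages: list[dict]) -> str:
    """Placeholder for the actual LLM completion call."""
    await asyncio.sleep(0.1)  # simulate network latency
    return "model response"

async def guarded_completion(messages: list[dict]) -> str:
    # Start the moderation check and the real call concurrently,
    # mirroring how a moderation hook runs in parallel to the LLM call.
    moderation_task = asyncio.create_task(check_prompt_injection(messages))
    llm_task = asyncio.create_task(call_llm(messages))
    try:
        await moderation_task  # raises if the input is flagged
    except Exception:
        llm_task.cancel()      # drop the in-flight completion on failure
        raise
    return await llm_task

if __name__ == "__main__":
    msgs = [{"role": "user", "content": "What is the capital of France?"}]
    print(asyncio.run(guarded_completion(msgs)))
```

The design point is that the moderation check adds little or no latency to clean requests, since the completion is already in flight while the check runs; only flagged requests pay the cost of a cancelled call.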