Runs user calls through an LLM API to check for prompt injection attacks. This happens in parallel to the actual LLM call, using `async_moderation_hook`.
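The parallel-moderation pattern described above can be sketched with plain `asyncio`. This is a minimal illustration only, not LiteLLM's actual implementation: `call_llm`, `guarded_completion`, and the keyword-based injection check are stand-ins; in LiteLLM the hook is a second LLM classification call wired in via a custom callback.

```python
import asyncio


class PromptInjectionError(Exception):
    """Raised when the moderation check flags a prompt."""


async def call_llm(prompt: str) -> str:
    # Stand-in for the real LLM completion call.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"


async def async_moderation_hook(prompt: str) -> None:
    # Stand-in for the moderation check. A toy heuristic here;
    # the real hook sends the prompt to an LLM-based classifier.
    await asyncio.sleep(0.01)
    if "ignore previous instructions" in prompt.lower():
        raise PromptInjectionError("prompt injection detected")


async def guarded_completion(prompt: str) -> str:
    # Run the completion and the moderation check concurrently.
    # If the hook raises, gather propagates the error and the
    # request fails instead of returning the LLM response.
    response, _ = await asyncio.gather(
        call_llm(prompt),
        async_moderation_hook(prompt),
    )
    return response
```

Because the check runs concurrently rather than before the completion call, it adds no extra latency on the happy path; the trade-off is that the LLM call is already in flight when a prompt is flagged.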
LiteLLM Enterprise
Code in this folder is licensed under a commercial license. Please review the LICENSE.md file within the /enterprise folder.
These features are covered under the LiteLLM Enterprise contract
👉 Using this in an enterprise, or need specific features? Meet with us here
See all Enterprise Features here 👉 Docs