We should use the Inference APIs to execute Llama Guard instead of depending directly on HuggingFace modeling code. The actual inference is handled by the Inference provider.
Files in this directory:

- __init__.py
- base.py
- code_scanner.py
- llama_guard.py
- prompt_guard.py
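As a rough sketch of the direction described above, a Llama Guard shield can delegate generation to whatever Inference API the stack exposes rather than loading HuggingFace weights itself. The `InferenceClient` protocol, the `LlamaGuardShield` class, the `chat_completion` method, the model name, and the prompt below are hypothetical stand-ins for illustration, not the actual llama-stack interfaces:

```python
from typing import List, Protocol


class InferenceClient(Protocol):
    """Hypothetical stand-in for the stack's Inference API client."""

    def chat_completion(self, model: str, messages: List[dict]) -> str:
        ...


class LlamaGuardShield:
    """Runs Llama Guard through the Inference API instead of importing
    HuggingFace modeling code (AutoModelForCausalLM, tokenizers, devices)."""

    def __init__(self, inference: InferenceClient, model: str = "Llama-Guard-3-8B"):
        # All inference concerns (weights, devices, batching) live behind `inference`.
        self.inference = inference
        self.model = model  # assumed checkpoint name, for illustration only

    def is_safe(self, user_message: str) -> bool:
        # Llama Guard is itself an LLM: ask it to classify the message and
        # parse the "safe" / "unsafe" verdict from the completion.
        prompt = (
            "Classify the following user message as 'safe' or 'unsafe' "
            "under the safety policy.\n\n"
            f"User: {user_message}\n\nAnswer:"
        )
        verdict = self.inference.chat_completion(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return verdict.strip().lower().startswith("safe")
```

A shield written this way needs no torch or transformers imports; swapping the Llama Guard checkpoint or the serving backend becomes a configuration change on the Inference side rather than a code change in the safety provider.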