llama-stack-mirror/llama_stack/apis/safety
Ashwin Bharambe 0a3999a9a4
Use inference APIs for executing Llama Guard (#121)
We should use Inference APIs to execute Llama Guard instead of needing to use HuggingFace modeling code directly. The actual inference is handled by the Inference API.
2024-09-28 15:40:06 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
client.py Use inference APIs for executing Llama Guard (#121) 2024-09-28 15:40:06 -07:00
safety.py [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) 2024-09-23 14:22:22 -07:00
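The change described in #121 moves Llama Guard execution behind the Inference API so the safety shield no longer loads HuggingFace models itself. Below is a minimal, hypothetical sketch of that design: the class, prompt template, and function names (`LlamaGuardShield`, `InferenceFn`, `run`) are illustrative assumptions, not the actual llama_stack APIs in `safety.py` or `client.py`.

```python
# Hypothetical sketch of the design in #121: a safety shield that formats the
# Llama Guard prompt and delegates generation to an injected Inference backend
# instead of using HuggingFace modeling code directly. Names are illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str


# The shield depends only on this callable; any Inference provider can supply it.
InferenceFn = Callable[[str], str]  # prompt in, raw completion out

LLAMA_GUARD_TEMPLATE = (
    "Task: Check if there is unsafe content in the conversation below.\n\n"
    "<BEGIN CONVERSATION>\n{conversation}\n<END CONVERSATION>\n\n"
    "Provide your safety assessment: answer 'safe' or 'unsafe'."
)


@dataclass
class ShieldResponse:
    is_violation: bool
    details: str


class LlamaGuardShield:
    """Builds the Llama Guard prompt and delegates execution to an inference backend."""

    def __init__(self, inference: InferenceFn) -> None:
        # Injected Inference API call; no model loading happens in the shield.
        self.inference = inference

    def run(self, messages: List[Message]) -> ShieldResponse:
        conversation = "\n".join(f"{m.role}: {m.content}" for m in messages)
        prompt = LLAMA_GUARD_TEMPLATE.format(conversation=conversation)
        completion = self.inference(prompt).strip().lower()
        return ShieldResponse(
            is_violation=completion.startswith("unsafe"),
            details=completion,
        )


# Example wiring with a stub inference backend standing in for a real provider:
if __name__ == "__main__":
    shield = LlamaGuardShield(inference=lambda prompt: "safe")
    print(shield.run([Message(role="user", content="Hello!")]))
```

The key design point is dependency inversion: the shield only needs a text-in/text-out inference call, so any Inference provider (local, remote, or hosted) can back Llama Guard without the safety code knowing about model weights or tokenizers.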