llama-stack-mirror/llama_stack/providers/impls
Ashwin Bharambe 0a3999a9a4
Use inference APIs for executing Llama Guard (#121)
We should use Inference APIs to execute Llama Guard instead of depending directly on HuggingFace modeling code. The actual inference concerns are handled by the Inference provider.
2024-09-28 15:40:06 -07:00
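The change described above means the safety shield asks an inference backend to generate Llama Guard's verdict rather than importing HuggingFace modeling code itself. A minimal sketch of that shape, where InferenceAPI, Message, and LlamaGuardShield are hypothetical stand-ins and not the actual llama-stack interfaces:

```python
# Illustrative sketch only: InferenceAPI, Message, and LlamaGuardShield are
# hypothetical names, not the real llama-stack API at this commit.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str


class InferenceAPI(ABC):
    """Abstract inference backend; concrete impls may wrap HF, vLLM, etc."""

    @abstractmethod
    def chat_completion(self, model: str, messages: List[Message]) -> str:
        ...


class LlamaGuardShield:
    """Runs Llama Guard by delegating generation to an Inference API
    instead of loading HuggingFace model code directly."""

    def __init__(self, inference: InferenceAPI, model: str = "Llama-Guard-3-8B"):
        self.inference = inference
        self.model = model

    def run(self, conversation: List[Message]) -> bool:
        # Build the Llama Guard prompt, then let the inference backend handle
        # tokenization, device placement, and decoding.
        prompt = self._build_prompt(conversation)
        output = self.inference.chat_completion(
            model=self.model,
            messages=[Message(role="user", content=prompt)],
        )
        # Llama Guard replies with "safe" or "unsafe" plus violated categories.
        return output.strip().lower().startswith("safe")

    def _build_prompt(self, conversation: List[Message]) -> str:
        rendered = "\n".join(f"{m.role}: {m.content}" for m in conversation)
        return (
            "Task: Check if there is unsafe content in the conversation below.\n\n"
            f"<BEGIN CONVERSATION>\n{rendered}\n<END CONVERSATION>\n\n"
            "Provide your safety assessment:"
        )
```

With this split, swapping the underlying model runtime only requires a different InferenceAPI implementation; the shield logic stays unchanged.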
ios/inference Drop header from LocalInference.h 2024-09-25 11:27:37 -07:00
meta_reference Use inference APIs for executing Llama Guard (#121) 2024-09-28 15:40:06 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00