Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-25 07:12:03 +00:00)
We are now testing the safety capability with the starter image. This includes a few changes:

* Enable the safety integration test
* Relax the shield model requirements from llama-guard so it works with llama-guard3:8b coming from Ollama
* Expose a shield for each inference provider in the starter distro. The shield is only registered if the provider is enabled, and only added if the provider claims to support a safety model
* Add missing provider models
* Add pointers to the official documentation pages on provider model support

Closes: https://github.com/meta-llama/llama-stack/issues/2528

Signed-off-by: Sébastien Han <seb@redhat.com>
| Path |
|---|
| anthropic |
| bedrock |
| cerebras |
| cerebras_openai_compat |
| databricks |
| fireworks |
| fireworks_openai_compat |
| gemini |
| groq |
| groq_openai_compat |
| llama_openai_compat |
| nvidia |
| ollama |
| openai |
| passthrough |
| runpod |
| sambanova |
| sambanova_openai_compat |
| tgi |
| together |
| together_openai_compat |
| vllm |
| watsonx |
| __init__.py |