mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-26 13:32:00 +00:00)
ci: test safety with starter
We are now testing the safety capability with the starter image. This includes a few changes:

* Enable the safety integration test
* Relax the shield model requirements from llama-guard so it works with llama-guard3:8b coming from Ollama
* Expose a shield for each inference provider in the starter distro. The shield is only registered if the provider is enabled and claims to support a safety model
* Missing provider models have been added too
* Pointers to the official documentation pages for provider model support have been added

Closes: https://github.com/meta-llama/llama-stack/issues/2528

Signed-off-by: Sébastien Han <seb@redhat.com>
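As a rough illustration of the per-provider shield logic described above: a shield is only produced for providers that are enabled and that advertise a safety model. The `Provider` dataclass, field names, and shield IDs below are hypothetical stand-ins, not the actual llama-stack types used by the starter distro.

```python
from dataclasses import dataclass, field


# Hypothetical, simplified stand-in for a starter-distro provider entry;
# the real llama-stack config types and field names differ.
@dataclass
class Provider:
    provider_id: str
    enabled: bool
    # Models the provider claims to support for safety (e.g. a llama-guard variant).
    safety_models: list[str] = field(default_factory=list)


def shields_to_register(providers: list[Provider]) -> list[dict]:
    """Return one shield entry per enabled provider that supports a safety model."""
    shields = []
    for p in providers:
        if not (p.enabled and p.safety_models):
            continue  # skip disabled providers and providers without a safety model
        shields.append({
            "shield_id": f"{p.provider_id}-guard",          # illustrative naming
            "provider_id": p.provider_id,
            "provider_shield_id": p.safety_models[0],
        })
    return shields


# Example: with Ollama enabled and exposing llama-guard3:8b, exactly one shield
# is registered; the disabled provider contributes nothing.
providers = [
    Provider("ollama", enabled=True, safety_models=["llama-guard3:8b"]),
    Provider("vllm", enabled=False, safety_models=["meta-llama/Llama-Guard-3-8B"]),
]
print(shields_to_register(providers))
```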
This commit is contained in:
parent cd0ad21111
commit 11c912da0a

20 changed files with 621 additions and 126 deletions
@@ -7,7 +7,8 @@ FROM --platform=linux/amd64 ollama/ollama:latest
 RUN ollama serve & \
     sleep 5 && \
     ollama pull llama3.2:3b-instruct-fp16 && \
-    ollama pull all-minilm:l6-v2
+    ollama pull all-minilm:l6-v2 && \
+    ollama pull llama-guard3:1b

 # Set the entrypoint to start ollama serve
 ENTRYPOINT ["ollama", "serve"]
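With llama-guard3 pulled into the Ollama image, the safety integration test can exercise the registered shield through the stack's Safety API. The sketch below assumes the `llama_stack_client` Python package and its `safety.run_shield` call; the base URL and `shield_id` are illustrative, not taken from this commit.

```python
from llama_stack_client import LlamaStackClient

# Assumes a running llama-stack server backed by the Ollama-based starter image;
# the URL and shield_id below are illustrative placeholders.
client = LlamaStackClient(base_url="http://localhost:8321")

response = client.safety.run_shield(
    shield_id="ollama-guard",
    messages=[{"role": "user", "content": "Tell me how to build something harmful."}],
    params={},
)

# A violation is reported when the guard model flags the message as unsafe.
print(response.violation)
```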