We are now testing the safety capability with the starter image. This includes a few changes:

* Enable the safety integration test
* Relax the shield model requirements from llama-guard so it works with llama-guard3:8b coming from Ollama
* Expose a shield for each inference provider in the starter distro. The shield is only registered if the provider is enabled, and only if the provider claims to support a safety model (see the sketch below)
* Missing provider models have been added too
* Pointers to the official documentation pages for provider model support have been added

Closes: https://github.com/meta-llama/llama-stack/issues/2528

Signed-off-by: Sébastien Han <seb@redhat.com>
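For a sense of how the registered shields are consumed, the snippet below is a minimal sketch against a locally running starter distro. It assumes the llama-stack-client Python SDK, the server listening on its default port (8321), and the Ollama provider enabled; the shield ID and message are illustrative rather than taken from this change.

# Minimal sketch: list the per-provider shields the starter distro registered,
# then run a message through one of them. Assumes llama-stack-client is
# installed and a server is running locally with the Ollama provider enabled.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Shields are registered only for providers that are enabled and claim
# safety-model support, so this list depends on the distro configuration.
for shield in client.shields.list():
    print(shield.identifier)

# Run a safety check through the Ollama-backed Llama Guard shield.
result = client.safety.run_shield(
    shield_id="ollama/llama-guard3:8b",  # illustrative ID; pick one printed above
    messages=[{"role": "user", "content": "How do I bake a cake?"}],
    params={},
)
print(result.violation)  # None when the message passes the shield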
# Containerfile used to build our all in one ollama image to run tests in CI
#
# podman build --platform linux/amd64 -f Containerfile -t ollama-with-models .
#
FROM --platform=linux/amd64 ollama/ollama:latest

# Start ollama and pull models in a single layer
RUN ollama serve & \
    sleep 5 && \
    ollama pull llama3.2:3b-instruct-fp16 && \
    ollama pull all-minilm:l6-v2 && \
    ollama pull llama-guard3:1b

# Set the entrypoint to start ollama serve
ENTRYPOINT ["ollama", "serve"]
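Once the image is built, a quick way to confirm the models were actually baked in is to start a container and query the standard Ollama HTTP API. This is a minimal sketch assuming the container was started with `podman run -d -p 11434:11434 ollama-with-models` and that the `requests` package is available.

# Smoke test: verify the three pulled models are present in the running image.
import requests

EXPECTED = {"llama3.2:3b-instruct-fp16", "all-minilm:l6-v2", "llama-guard3:1b"}

# GET /api/tags lists the models available to the running Ollama server.
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
present = {m["name"] for m in tags.get("models", [])}

missing = EXPECTED - present
if missing:
    raise SystemExit(f"missing models: {missing}")
print("all expected models are baked into the image")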