mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-12-26 17:52:01 +00:00
We are now testing the safety capability with the starter image. This includes a few changes:

* Enable the safety integration test
* Relax the shield model requirements from llama-guard so it works with llama-guard3:8b served by Ollama
* Expose a shield for each inference provider in the starter distro. The shield is only registered if the provider is enabled, and only if the provider claims to support a safety model
* Add the missing provider models
* Add pointers to the official documentation pages for each provider's supported models

Closes: https://github.com/meta-llama/llama-stack/issues/2528

Signed-off-by: Sébastien Han <seb@redhat.com>
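As a rough illustration of the per-provider shield registration described above, a starter-distro run configuration might gain an entry along these lines. This is a hypothetical sketch: the exact field names and the shield/model identifiers (`ollama-guard`, `llama-guard3:8b`) are assumptions for illustration, not the actual schema shipped by the PR.

```yaml
# Hypothetical fragment of a starter-distro run config.
# The shield below would only be registered when the Ollama
# inference provider is enabled and advertises a safety model.
shields:
  - shield_id: ollama-guard          # assumed identifier
    provider_id: ollama
    params:
      model: llama-guard3:8b         # safety model served by Ollama
```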
11 lines
339 B
YAML
name: Setup Ollama
description: Start Ollama
runs:
  using: "composite"
  steps:
    - name: Start Ollama
      shell: bash
      run: |
        docker run -d --name ollama -p 11434:11434 docker.io/leseb/ollama-with-models
        # TODO: rebuild an ollama image with llama-guard3:1b
        docker exec ollama ollama pull llama-guard3:1b
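A workflow would consume this composite action with a `uses:` step pointing at its directory. The checkout step and the action path (`./.github/actions/setup-ollama`) are assumptions for illustration; the actual path depends on where the action file lives in the repo.

```yaml
# Hypothetical workflow snippet using the composite action above.
jobs:
  integration-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Path is an assumption; composite actions are referenced
      # by the directory containing their action.yml.
      - name: Setup Ollama
        uses: ./.github/actions/setup-ollama
```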