llama-stack-mirror/scripts
Omer Tuchfeld a5309d6ff1 fix(install): explicit docker.io usage
When podman is used and the registry is omitted from an image
reference, podman prompts the user to choose one. However, since we
pipe podman's output to /dev/null, the user never sees the prompt: the
script ends abruptly, which is confusing.

This commit explicitly uses the docker.io registry for the ollama image
and the llama-stack image so that the prompt is avoided.

Signed-off-by: Omer Tuchfeld <omer@tuchfeld.dev>
2025-07-22 11:27:59 +02:00
check-init-py.sh               ci: vector_io provider integration tests (#2537)                             2025-06-26 17:04:32 -07:00
check-workflows-use-hashes.sh  chore: enforce no git tags or branches in external github actions (#2159)    2025-05-14 20:40:06 +02:00
distro_codegen.py              chore: remove dead code (#2403)                                               2025-06-05 21:17:54 +02:00
gen-changelog.py               chore: enable ruff for ./scripts too (#1643)                                  2025-03-18 12:17:21 -07:00
generate_prompt_format.py      refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
install.sh                     fix(install): explicit docker.io usage                                       2025-07-22 11:27:59 +02:00
provider_codegen.py            docs: auto generated documentation for providers (#2543)                     2025-06-30 15:13:20 +02:00
run_client_sdk_tests.py        chore: consolidate scripts under ./scripts directory (#1646)                 2025-03-17 17:56:30 -04:00
setup_telemetry.sh             feat: improve telemetry (#2590)                                               2025-07-04 17:29:09 +02:00
unit-tests.sh                  test: Measure and track code coverage (#2636)                                2025-07-18 18:08:36 +02:00