# What does this PR do?

When podman is used and the registry is omitted from an image reference, podman prompts the user to select one. However, the script pipes podman's output to /dev/null, so the user never sees the prompt and the script ends abruptly, which is confusing. This commit explicitly uses the docker.io registry for the ollama image and the llama-stack image so that the prompt is avoided. A sketch of the kind of change involved is shown at the end of this description.

## Test Plan

I ran the script on a machine with podman and confirmed the issue was resolved.

## Image

Before the fix, this is what would happen:

<img width="748" height="95" alt="image" src="https://github.com/user-attachments/assets/9c609f88-c0a8-45e7-a789-834f64f601e5" />

Signed-off-by: Omer Tuchfeld <omer@tuchfeld.dev>
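For context, here is a minimal sketch of the kind of change, assuming install.sh selects a container engine and pulls images by name; the variable names and image references below are illustrative, not necessarily the exact ones in the script:

```bash
#!/usr/bin/env bash
# Illustrative sketch only; variable and image names are assumptions,
# not copied from install.sh.

ENGINE="podman"  # podman prompts on unqualified (short) image names

# Before: a short name like "ollama/ollama" makes podman ask which
# registry to use. With output piped to /dev/null, that prompt is
# invisible and the script appears to stop for no reason.
# "$ENGINE" pull "ollama/ollama" >/dev/null 2>&1

# After: fully qualified references pin the registry, so podman
# pulls without prompting.
OLLAMA_IMAGE="docker.io/ollama/ollama"
STACK_IMAGE="docker.io/llamastack/distribution-ollama"

"$ENGINE" pull "$OLLAMA_IMAGE" >/dev/null 2>&1
"$ENGINE" pull "$STACK_IMAGE" >/dev/null 2>&1
```

Pinning docker.io also keeps the behavior identical under docker, which never prompts and resolves unqualified names against Docker Hub anyway.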