refactor: remove container from list of run image types (#2178)

# What does this PR do?
Removes the ability to run Llama Stack container images through the `llama stack` CLI.
Closes #2110
## Test Plan
Run:
```
llama stack run /path/to/run.yaml --image-type container
```
Expected outcome:
```
llama stack run: error: argument --image-type: invalid choice: 'container' (choose from 'conda', 'venv')
```
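As a sanity check that the remaining image types are unaffected (not part of the original test plan), running with `venv` should still be accepted by the argument parser:
```
llama stack run /path/to/run.yaml --image-type venv
```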

This commit is contained in:
Mark Campbell 2025-06-02 08:57:55 +01:00 committed by GitHub
parent b21050935e
commit c7be73fb16
4 changed files with 41 additions and 70 deletions

@@ -260,7 +260,41 @@ Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/ContainerfileFROM pyth
You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
```
After this step is successful, you should be able to find the built container image and test it with `llama stack run <path/to/run.yaml>`.
Now set some environment variables for the inference model ID and the Llama Stack port, and create a local directory to mount into the container's file system.
```
export INFERENCE_MODEL="llama3.2:3b"
export LLAMA_STACK_PORT=8321
mkdir -p ~/.llama
```
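The container configured below points at an Ollama server on the host, so the model referenced by `INFERENCE_MODEL` should already be available there. A minimal sketch, assuming Ollama is installed and running locally:
```
ollama pull llama3.2:3b
```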
After this step is successful, you should be able to find the built container image and test it with the below Docker command:
```
docker run -d \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ~/.llama:/root/.llama \
localhost/distribution-ollama:dev \
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \
--env OLLAMA_URL=http://host.docker.internal:11434
```
Here are the Docker flags and their uses:
* `-d`: Runs the container in detached mode as a background process
* `-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT`: Maps the container port to the host port for accessing the server
* `-v ~/.llama:/root/.llama`: Mounts the local .llama directory to persist configurations and data
* `localhost/distribution-ollama:dev`: The name and tag of the container image to run
* `--port $LLAMA_STACK_PORT`: Port number for the server to listen on
* `--env INFERENCE_MODEL=$INFERENCE_MODEL`: Sets the model to use for inference
* `--env OLLAMA_URL=http://host.docker.internal:11434`: Configures the URL for the Ollama service
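A minimal sketch for checking that the container came up cleanly, assuming the image name and port used above; the health endpoint path is an assumption and may differ between Llama Stack releases:
```
# List the running container and inspect its logs
docker ps --filter ancestor=localhost/distribution-ollama:dev
docker logs $(docker ps -q --filter ancestor=localhost/distribution-ollama:dev)

# Optionally probe the server (endpoint path is an assumption)
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```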
:::
::::