**Summary:** Extend the shorthand run command so it can run successfully when the config exists under DISTRIBS_BASE_DIR (i.e. `~/.llama/distributions`).

For example, imagine you created a new stack using the `llama stack build` command and named it "my-awesome-llama-stack":

```
$ llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): my-awesome-llama-stack
```

To run the stack you created, you previously had to use the long config path:

```
llama stack run ~/.llama/distributions/llamastack-my-awesome-llama-stack/my-awesome-llama-stack-run.yaml
```

With this change, you can start it using the stack name instead of the full path:

```
llama stack run my-awesome-llama-stack
```

**Test Plan:**

Verify the command fails when the stack doesn't exist:

```
python3 -m llama_stack.cli.llama stack run my-test-stack
```

Output [FAILURE]:

```
usage: llama stack run [-h] [--port PORT] [--disable-ipv6] config
llama stack run: error: File /Users/vladimirivic/.llama/distributions/llamastack-my-test-stack/my-test-stack-run.yaml does not exist.

Please run `llama stack build` to generate (and optionally edit) a run.yaml file
```

Create a new stack using `llama stack build` and name it `my-test-stack`.

Verify the command runs successfully:

```
python3 -m llama_stack.cli.llama stack run my-test-stack
```

Output [SUCCESS]:

```
Listening on ['::', '0.0.0.0']:5000
INFO: Started server process [80146]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://['::', '0.0.0.0']:5000 (Press CTRL+C to quit)
```
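For context, the name-to-path resolution described above is simple: if the argument isn't an existing file, treat it as a stack name and look for its run config under DISTRIBS_BASE_DIR. Below is a minimal sketch of that idea, assuming the path pattern shown in the examples; the helper name `resolve_run_config` is hypothetical and not necessarily the exact code in `run.py`.

```python
from pathlib import Path

# Default location mentioned above (~/.llama/distributions).
DISTRIBS_BASE_DIR = Path.home() / ".llama" / "distributions"


def resolve_run_config(config_arg: str) -> Path:
    """Hypothetical sketch: map a `llama stack run` argument to a run.yaml path.

    Accepts either an explicit path or a bare stack name.
    """
    path = Path(config_arg)
    if path.exists():
        # The argument is already a valid path to a run.yaml file.
        return path

    # Otherwise treat the argument as a stack name, e.g. "my-awesome-llama-stack" ->
    # ~/.llama/distributions/llamastack-my-awesome-llama-stack/my-awesome-llama-stack-run.yaml
    candidate = DISTRIBS_BASE_DIR / f"llamastack-{config_arg}" / f"{config_arg}-run.yaml"
    if not candidate.exists():
        raise FileNotFoundError(
            f"File {candidate} does not exist. "
            "Please run `llama stack build` to generate (and optionally edit) a run.yaml file"
        )
    return candidate
```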
**Files changed:**

- __init__.py
- build.py
- configure.py
- list_apis.py
- list_providers.py
- run.py
- stack.py