llama-stack-mirror/llama_stack/cli/stack
ehhuang 426cac078b
Some checks failed
SqlStore Integration Tests / test-postgres (3.12) (push) Failing after 0s
Integration Auth Tests / test-matrix (oauth2_token) (push) Failing after 0s
Test Llama Stack Build / generate-matrix (push) Successful in 3s
Integration Tests (Replay) / Integration Tests (, , , client=, ) (push) Failing after 4s
Vector IO Integration Tests / test-matrix (push) Failing after 3s
Test External Providers Installed via Module / test-external-providers-from-module (venv) (push) Has been skipped
Test Llama Stack Build / build-single-provider (push) Failing after 3s
SqlStore Integration Tests / test-postgres (3.13) (push) Failing after 6s
Test Llama Stack Build / build-custom-container-distribution (push) Failing after 3s
Test Llama Stack Build / build-ubi9-container-distribution (push) Failing after 3s
Python Package Build Test / build (3.12) (push) Failing after 3s
Python Package Build Test / build (3.13) (push) Failing after 2s
Test External API and Providers / test-external (venv) (push) Failing after 4s
Unit Tests / unit-tests (3.13) (push) Failing after 3s
API Conformance Tests / check-schema-compatibility (push) Successful in 11s
Test Llama Stack Build / build (push) Failing after 3s
Unit Tests / unit-tests (3.12) (push) Failing after 4s
UI Tests / ui-tests (22) (push) Successful in 44s
Pre-commit / pre-commit (push) Successful in 1m24s
chore: use uvicorn to start llama stack server everywhere (#3625)
# What does this PR do?
https://github.com/llamastack/llama-stack/pull/3462 allows using uvicorn
to start the llama stack server, which supports spawning multiple workers.

This PR enables launching more than one worker from `llama stack run`
(the parameter will be added in a follow-up PR; this one stays focused on
simplification) by removing the old way of launching the stack server and
consolidating all launching on `uvicorn.run`. A hedged sketch of that launch
pattern follows below.
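
For illustration only, here is a minimal sketch of serving an ASGI app through `uvicorn.run` with multiple workers. This is not the actual llama-stack entrypoint; the module path, port, and worker count are placeholders:

```python
# Minimal sketch of the uvicorn.run launch pattern this PR consolidates on.
# NOTE: "my_app.server:app", the port, and the worker count are illustrative
# placeholders, not llama-stack's actual entrypoint or defaults.
import uvicorn

if __name__ == "__main__":
    # With workers > 1, uvicorn needs the application as an import string
    # ("module:attribute") so each worker process can import it independently.
    uvicorn.run(
        "my_app.server:app",
        host="0.0.0.0",
        port=8321,
        workers=2,
    )
```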


## Test Plan
Ran `llama stack run starter`.
CI
2025-10-06 14:27:40 +02:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| _build.py | feat: include a default inference store during llama stack build (#3373) | 2025-09-09 15:54:58 -07:00 |
| build.py | revert: "feat(cli): make venv the default image type" (#3196) | 2025-08-18 15:31:01 -07:00 |
| list_apis.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| list_providers.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| list_stacks.py | feat: add llama stack rm command (#2127) | 2025-05-21 10:25:51 +02:00 |
| remove.py | chore: make cprint write to stderr (#2250) | 2025-05-24 23:39:57 -07:00 |
| run.py | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| stack.py | feat: add llama stack rm command (#2127) | 2025-05-21 10:25:51 +02:00 |
| utils.py | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |