llama-stack-mirror/llama_stack
ehhuang 426cac078b
Some checks failed
SqlStore Integration Tests / test-postgres (3.12) (push) Failing after 0s
Integration Auth Tests / test-matrix (oauth2_token) (push) Failing after 0s
Test Llama Stack Build / generate-matrix (push) Successful in 3s
Integration Tests (Replay) / Integration Tests (, , , client=, ) (push) Failing after 4s
Vector IO Integration Tests / test-matrix (push) Failing after 3s
Test External Providers Installed via Module / test-external-providers-from-module (venv) (push) Has been skipped
Test Llama Stack Build / build-single-provider (push) Failing after 3s
SqlStore Integration Tests / test-postgres (3.13) (push) Failing after 6s
Test Llama Stack Build / build-custom-container-distribution (push) Failing after 3s
Test Llama Stack Build / build-ubi9-container-distribution (push) Failing after 3s
Python Package Build Test / build (3.12) (push) Failing after 3s
Python Package Build Test / build (3.13) (push) Failing after 2s
Test External API and Providers / test-external (venv) (push) Failing after 4s
Unit Tests / unit-tests (3.13) (push) Failing after 3s
API Conformance Tests / check-schema-compatibility (push) Successful in 11s
Test Llama Stack Build / build (push) Failing after 3s
Unit Tests / unit-tests (3.12) (push) Failing after 4s
UI Tests / ui-tests (22) (push) Successful in 44s
Pre-commit / pre-commit (push) Successful in 1m24s
chore: use uvicorn to start llama stack server everywhere (#3625)
# What does this PR do?
https://github.com/llamastack/llama-stack/pull/3462 allows using uvicorn
to start the llama stack server, which supports spawning multiple workers.

This PR enables launching more than one worker from `llama stack run` (the
parameter will be added in a follow-up PR, to keep this PR focused on
simplification) by removing the old way of launching the stack server and
consolidating on launching via `uvicorn.run` only.
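
For illustration, a minimal sketch of what launching via `uvicorn.run` with
multiple workers can look like; the import string
`llama_stack.core.server.server:app`, the port, and the worker count are
assumptions for the example, not necessarily the project's actual entry point
or defaults:

```python
# Minimal sketch: launching an ASGI app through uvicorn.run with several
# worker processes. The module path below is an assumption for illustration.
import uvicorn


def main() -> None:
    uvicorn.run(
        "llama_stack.core.server.server:app",  # uvicorn requires an import string when workers > 1
        host="0.0.0.0",
        port=8321,   # assumed port for the example
        workers=4,   # spawn multiple worker processes
    )


if __name__ == "__main__":
    main()
```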


## Test Plan
ran `llama stack run starter`
CI
2025-10-06 14:27:40 +02:00
| Name | Last commit message | Date |
| --- | --- | --- |
| apis | feat(api): add extra_body parameter support with shields example (#3670) | 2025-10-03 13:25:09 -07:00 |
| cli | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| core | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| distributions | docs: Fix Dell distro documentation code snippets (#3640) | 2025-10-02 11:11:30 +02:00 |
| models | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| providers | chore: inference=remote::llama-openai-compat does not support /v1/completion (#3683) | 2025-10-04 11:36:48 -07:00 |
| strong_typing | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| testing | feat(tests): implement test isolation for inference recordings (#3681) | 2025-10-04 11:34:18 -07:00 |
| ui | chore(ui-deps): bump react-dom and @types/react-dom in /llama_stack/ui (#3693) | 2025-10-06 00:02:31 -04:00 |
| __init__.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | feat: auto-detect Console width (#3327) | 2025-10-03 10:19:31 +02:00 |
| schema_utils.py | feat(api): add extra_body parameter support with shields example (#3670) | 2025-10-03 13:25:09 -07:00 |