Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-25 01:01:13 +00:00)

8 commits

**b11bcfde11** refactor(build): rework CLI commands and build process (1/2) (#2974)
# What does this PR do?

This PR does a few things outlined in #2878, namely:

1. adds `llama stack list-deps`, a command which simply takes the build logic and, instead of executing one of the `build_...` scripts, displays all of the providers' dependencies using the provider `module`s and `uv`.
2. deprecates `llama stack build` in favor of `llama stack list-deps`
3. updates all tests to use `list-deps` alongside `build`

PR 2/2 will migrate `llama stack run`'s default behavior to be `llama stack build --run` and use the new `list-deps` command under the hood before running the server.

Example of `llama stack list-deps starter`:

```
llama stack list-deps starter --format json
{
  "name": "starter",
  "description": "Quick start template for running Llama Stack with several popular providers. This distribution is intended for CPU-only environments.",
  "apis": [
    {"api": "inference", "provider": "remote::cerebras"},
    {"api": "inference", "provider": "remote::ollama"},
    {"api": "inference", "provider": "remote::vllm"},
    {"api": "inference", "provider": "remote::tgi"},
    {"api": "inference", "provider": "remote::fireworks"},
    {"api": "inference", "provider": "remote::together"},
    {"api": "inference", "provider": "remote::bedrock"},
    {"api": "inference", "provider": "remote::nvidia"},
    {"api": "inference", "provider": "remote::openai"},
    {"api": "inference", "provider": "remote::anthropic"},
    {"api": "inference", "provider": "remote::gemini"},
    {"api": "inference", "provider": "remote::vertexai"},
    {"api": "inference", "provider": "remote::groq"},
    {"api": "inference", "provider": "remote::sambanova"},
    {"api": "inference", "provider": "remote::azure"},
    {"api": "inference", "provider": "inline::sentence-transformers"},
    {"api": "vector_io", "provider": "inline::faiss"},
    {"api": "vector_io", "provider": "inline::sqlite-vec"},
    {"api": "vector_io", "provider": "inline::milvus"},
    {"api": "vector_io", "provider": "remote::chromadb"},
    {"api": "vector_io", "provider": "remote::pgvector"},
    {"api": "files", "provider": "inline::localfs"},
    {"api": "safety", "provider": "inline::llama-guard"},
    {"api": "safety", "provider": "inline::code-scanner"},
    {"api": "agents", "provider": "inline::meta-reference"},
    {"api": "telemetry", "provider": "inline::meta-reference"},
    {"api": "post_training", "provider": "inline::torchtune-cpu"},
    {"api": "eval", "provider": "inline::meta-reference"},
    {"api": "datasetio", "provider": "remote::huggingface"},
    {"api": "datasetio", "provider": "inline::localfs"},
    {"api": "scoring", "provider": "inline::basic"},
    {"api": "scoring", "provider": "inline::llm-as-judge"},
    {"api": "scoring", "provider": "inline::braintrust"},
    {"api": "tool_runtime", "provider": "remote::brave-search"},
    {"api": "tool_runtime", "provider": "remote::tavily-search"},
    {"api": "tool_runtime", "provider": "inline::rag-runtime"},
    {"api": "tool_runtime", "provider": "remote::model-context-protocol"},
    {"api": "batches", "provider": "inline::reference"}
  ],
  "pip_dependencies": [
    "pandas", "opentelemetry-exporter-otlp-proto-http", "matplotlib",
    "opentelemetry-sdk", "sentence-transformers", "datasets",
    "pymilvus[milvus-lite]>=2.4.10", "codeshield", "scipy", "torchvision",
    "tree_sitter", "h11>=0.16.0", "aiohttp", "pymongo", "tqdm", "pythainlp",
    "pillow", "torch", "emoji", "grpcio>=1.67.1,<1.71.0", "fireworks-ai",
    "langdetect", "psycopg2-binary", "asyncpg", "redis", "together",
    "torchao>=0.12.0", "openai", "sentencepiece", "aiosqlite",
    "google-cloud-aiplatform", "faiss-cpu", "numpy", "sqlite-vec", "nltk",
    "scikit-learn", "mcp>=1.8.1", "transformers", "boto3", "huggingface_hub",
    "ollama", "autoevals", "sqlalchemy[asyncio]", "torchtune>=0.5.0",
    "chromadb-client", "pypdf", "requests", "anthropic", "chardet",
    "aiosqlite", "fastapi", "fire", "httpx", "uvicorn", "opentelemetry-sdk",
    "opentelemetry-exporter-otlp-proto-http"
  ]
}
```

<img width="1500" height="420" alt="Screenshot 2025-10-16 at 5 53 03 PM" src="https://github.com/user-attachments/assets/765929fb-93e2-44d7-9c3d-8918b70fc721" />

Signed-off-by: Charlie Doern <cdoern@redhat.com>
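
Since PR 2/2 plans to consume `list-deps` output before running the server, here is a minimal sketch of that pattern. The command and the `pip_dependencies` key are taken from the example output above; the script itself is hypothetical glue, not code from this PR:

```python
import json
import subprocess

# Dump the starter distribution's dependencies as JSON; the command and
# the "pip_dependencies" key match the example output shown above.
result = subprocess.run(
    ["llama", "stack", "list-deps", "starter", "--format", "json"],
    capture_output=True, text=True, check=True,
)
deps = json.loads(result.stdout)["pip_dependencies"]

# Install them with uv, deduplicating first (the raw list repeats a few
# entries such as "aiosqlite").
subprocess.run(["uv", "pip", "install", *sorted(set(deps))], check=True)
```
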
**fdb144f009** revert: feat(ci): use @next branch from llama-stack-client (#3593)

Reverts llamastack/llama-stack#3576. When I edit Stainless and codegen succeeds, the `next` branch is updated directly. That gives us no chance to see if something less than ideal is going on, and if something is wrong, all CI starts breaking immediately. This is not ideal. I will likely create another staging branch (`next-release` or similar) to accommodate the special workflow that Stainless requires.

**8dc9fd6844** feat(ci): use @next branch from llama-stack-client (#3576)
When we update Stainless (editor changes), the `next` branch gets updated. Eventually, when a release is decided on, the changes land in `main`. This is the Stainless workflow. This PR makes sure we follow that workflow by pulling from the `next` branch for our integration tests.

**a8aa815b6a** feat(tests): migrate to global "setups" system for test configuration (#3390)

This PR refactors the integration test system to use global "setups", which provides better separation of concerns: **suites = what to test, setups = how to configure.**

NOTE: if you have naming suggestions, please provide feedback.

Changes:

- New `tests/integration/setups.py` with global, reusable configurations (ollama, vllm, gpt, claude)
- Modified `scripts/integration-tests.sh` options to match the underlying pytest options
- Updated documentation to reflect the new global setup system

The main benefit is that setups can be reused across multiple suites (e.g., use "gpt" with any suite), even though they can sometimes be specifically tailored to a suite (vision <> ollama-vision). It is now easier to add new configurations without modifying existing suites.

Usage examples:

- `pytest tests/integration --suite=responses --setup=gpt`
- `pytest tests/integration --suite=vision` # auto-selects the "ollama-vision" setup
- `pytest tests/integration --suite=base --setup=vllm`
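
As a mental model of the new system, here is a rough sketch of what a global setup entry in `tests/integration/setups.py` could look like. The file path, the setup names, and the `OLLAMA_URL` value come from the commit messages; the `Setup` class, its fields, and the model IDs are invented for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class Setup:
    """One way to model 'how to configure': env vars plus default models."""

    name: str
    description: str
    env: dict[str, str] = field(default_factory=dict)
    defaults: dict[str, str] = field(default_factory=dict)


SETUPS = {
    "ollama": Setup(
        name="ollama",
        description="Local Ollama server for inference",
        env={"OLLAMA_URL": "http://localhost:11434"},
        defaults={"text_model": "ollama/some-local-model"},  # placeholder ID
    ),
    "gpt": Setup(
        name="gpt",
        description="OpenAI GPT models",
        defaults={"text_model": "openai/gpt-4o"},  # placeholder ID
    ),
}
```
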
**47b640370e** feat(tests): introduce a test "suite" concept to encompass dirs, options (#3339)
Our integration tests need to be 'grouped' because each group often needs a specific set of models it works with. We separated vision tests for this reason, and we have a separate set of tests which exercise the "Responses" API. This PR makes this system a bit more official, so it is very easy to target these groups and apply all testing infrastructure (for example, record-replay) uniformly across them.

There are three suites declared:

- base
- vision
- responses

Note that our CI currently runs the "base" and "vision" suites. You can use the `--suite` option when running pytest (or any of the testing scripts or workflows). For example:

```
OLLAMA_URL=http://localhost:11434 \
  pytest -s -v tests/integration/ --stack-config starter --suite vision
```
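
As background on how a custom `--suite` option like this plugs into pytest, here is a simplified conftest-style sketch. The `pytest_addoption`/`pytest_configure` hooks are standard pytest; the suite names come from this commit, while the directory mapping and selection logic are guesses, not the actual implementation:

```python
# conftest.py -- simplified sketch, not the actual llama-stack implementation.
SUITE_ROOTS = {
    # Only the suite names (base, vision, responses) come from the commit
    # message; this directory mapping is illustrative.
    "base": ["tests/integration"],
    "vision": ["tests/integration/inference"],
    "responses": ["tests/integration/responses"],
}


def pytest_addoption(parser):
    # Standard pytest hook for registering custom command-line options.
    parser.addoption(
        "--suite",
        default="base",
        choices=sorted(SUITE_ROOTS),
        help="Test suite to run (a named group of directories and options).",
    )


def pytest_configure(config):
    suite = config.getoption("--suite")
    # Point collection at the suite's directories; config.args is the list
    # of paths pytest collects from. A real implementation might instead
    # filter via pytest_ignore_collect or apply suite-specific options.
    config.args = list(SUITE_ROOTS[suite])
```
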
**eb07a0f86a** fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193)
I started this PR trying to unbreak a newly broken test, `test_agent_name`. This test had been broken all along but did not show up because during testing we were pulling the "non-updated" llama stack client. See this comment: https://github.com/llamastack/llama-stack/pull/3119#discussion_r2270988205

While fixing this, I encountered a large amount of badness in our CI workflow definitions.

- We weren't passing `LLAMA_STACK_DIR` or `LLAMA_STACK_CLIENT_DIR` overrides to `llama stack build` at all in some cases.
- Even when we did, we used `uv run` liberally. The first thing `uv run` does is "sync" the project environment, which means it undoes any mutations we might have made ourselves. But we make many mutations to these environments in our CI runners, the most important being during `llama stack build`, where we install distro dependencies. As a result, when you tried to run the integration tests, you would see old, strange versions.

## Test Plan

Re-record using:

```
sh scripts/integration-tests.sh --stack-config ci-tests \
  --provider ollama --test-pattern test_agent_name --inference-mode record
```

Then re-run with `--inference-mode replay`. But: eventually, this test turned out to be quite flaky for telemetry reasons. I haven't investigated it for now and, sadly, just disabled it since we have a release to push out.

**f4489eeb83** fix(ci): simplify integration tests replay mode (#2997)

We are going to split the record and replay workflows completely to simplify the concurrency key design. We can add vision tests by just adding to our matrix.

**27d866795c** feat(ci): add support for running vision inference tests (#2972)

This PR significantly refactors the Integration Tests workflow. The main goal behind the PR was to enable recording of vision tests, which had never run as part of our CI before. During debugging, I ended up making several other refactoring changes, hopefully increasing the robustness of the workflow.

After doing the experiments, I have updated the trigger event to `pull_request_target` so this workflow can get write permissions by default, but it will run with source code from the base (main) branch of the source repository only. If you do change the workflow, you'd need to experiment using the `workflow_dispatch` trigger. This should not be news to anyone using GitHub Actions (except me!)

It is likely to be a little rocky, though, while I learn more about GitHub Actions, etc. Please be patient :)

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>