llama-stack-mirror/tests/unit
Latest commit 83c89265e0 by Francisco Arceo:
chore: Adding unit tests for Milvus and OpenAI compatibility (#2640)
# What does this PR do?
- Enables unit tests for Milvus to start testing OpenAI compatibility, and fixes a few bugs.
- Also fixes an inconsistency in the Milvus config between the remote and inline providers.
- Adds pymilvus to the extras used for testing in CI (see the note just below this list for running these tests locally).
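
If you want to run the new Milvus unit tests locally, pymilvus needs to be importable in your environment. A minimal sketch, assuming you are working inside the project's `.venv` and have not already pulled in the test extras:

```bash
# pymilvus backs the Milvus provider exercised by these unit tests
pip install pymilvus
```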

I'm going to refactor this later to include the other inline providers
so that we can catch issues sooner.

I have another PR where I've been testing to find other bugs in the implementation; the required changes are drafted here:
https://github.com/meta-llama/llama-stack/pull/2617.

## Test Plan

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
Committed 2025-07-08 00:50:16 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `cli` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `distribution` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `files` | feat: support pagination in inference/responses stores (#2397) | 2025-06-16 22:43:35 -07:00 |
| `models` | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| `providers` | chore: Adding unit tests for Milvus and OpenAI compatibility (#2640) | 2025-07-08 00:50:16 -07:00 |
| `rag` | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |
| `registry` | feat: fine grained access control policy (#2264) | 2025-06-03 14:51:12 -07:00 |
| `server` | fix: AccessDeniedError leads to HTTP 500 instead of error 403 (#2595) | 2025-07-03 10:50:49 -07:00 |
| `utils` | fix: authorized sql store with postgres (#2641) | 2025-07-07 19:36:34 -07:00 |
| `__init__.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `conftest.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `fixtures.py` | fix: AccessDeniedError leads to HTTP 500 instead of error 403 (#2595) | 2025-07-03 10:50:49 -07:00 |
| `README.md` | chore: bump python supported version to 3.12 (#2475) | 2025-06-24 09:22:04 +05:30 |

# Llama Stack Unit Tests

You can run the unit tests by running:

```bash
source .venv/bin/activate
./scripts/unit-tests.sh [PYTEST_ARGS]
```

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., `-vvv` for verbosity). If no test directory is specified, it defaults to `tests/unit`, e.g.:

```bash
./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```
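
Because the extra arguments are forwarded to pytest verbatim, you can also combine a test directory with standard pytest selection flags. A hedged example (the `tests/unit/providers` directory comes from the listing above; the `-k` expression is purely illustrative):

```bash
# Run only the provider tests whose names mention "milvus",
# stopping at the first failure
./scripts/unit-tests.sh tests/unit/providers -k "milvus" -x
```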

If you'd like to run for a non-default version of Python (currently 3.12), pass the `PYTHON_VERSION` variable as follows:

```bash
source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh
```
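
The two options compose. For example, to run a single test file under Python 3.13 with extra verbosity (a sketch that simply combines the documented options above):

```bash
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```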