# What does this PR do?

Fixes the Weaviate provider's `query_vector` function for the case where the distance between the query embedding and an embedding in the vector DB is 0 (identical vectors). The provider now catches the resulting `ZeroDivisionError` and sets `score` to infinity, which represents maximum similarity.

Closes #2381

## Test Plan

Check out this PR and run the code below; it no longer raises a `ZeroDivisionError`:

```python
from llama_stack_client import LlamaStackClient

base_url = "http://localhost:8321"
client = LlamaStackClient(base_url=base_url)

# Pick any registered embedding model.
models = client.models.list()
embedding_model = next(m for m in models if m.model_type == "embedding").identifier
embedding_dimension = 384

_ = client.vector_dbs.register(
    vector_db_id="foo_db",
    embedding_model=embedding_model,
    embedding_dimension=embedding_dimension,
    provider_id="weaviate",
)

# Insert a chunk, then query with the exact same text so the query embedding
# is identical to the stored one (distance 0).
chunk = {
    "content": "foo",
    "mime_type": "text/plain",
    "metadata": {"document_id": "foo-id"},
}
client.vector_io.insert(vector_db_id="foo_db", chunks=[chunk])

client.vector_io.query(vector_db_id="foo_db", query="foo")
```
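For context, the guard described above amounts to something like the following sketch. The helper name and call site are illustrative, not the exact diff; the real change lives inside the Weaviate provider's `query_vector`:

```python
def distance_to_score(distance: float) -> float:
    """Convert a Weaviate distance into a similarity score via 1/distance.

    A distance of 0 means the query embedding and the stored embedding are
    identical, so 1/distance would raise ZeroDivisionError; return infinity
    (maximum similarity) instead.
    """
    try:
        return 1.0 / distance
    except ZeroDivisionError:
        return float("inf")
```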
# Llama Stack Integration Tests
We use pytest for parameterizing and running tests. You can see all options with:
```bash
cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help
```
Here are the most important options:
- `--stack-config`: specify the stack config to use. You have three ways to point to a stack (examples below):
  - a URL which points to a Llama Stack distribution server
  - a template (e.g., `fireworks`, `together`) or a path to a `run.yaml` file
  - a comma-separated list of `api=provider` pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
- `--env`: set environment variables, e.g. `--env KEY=value`. This is a utility option to set environment variables required by various providers.
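To make the three `--stack-config` forms concrete, the invocations would look roughly like this (the server URL, API key, and test path are placeholders):

```bash
# 1. a URL pointing at a running distribution server
pytest -s -v tests/integration/inference --stack-config=http://localhost:8321

# 2. a template name (or a path to a run.yaml file)
pytest -s -v tests/integration/inference --stack-config=together

# 3. ad-hoc api=provider pairs, with provider settings passed via --env
pytest -s -v tests/integration/inference \
   --stack-config=inference=fireworks \
   --env FIREWORKS_API_KEY=<your_key>
```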
Model parameters can be influenced by the following options:

- `--text-model`: comma-separated list of text models.
- `--vision-model`: comma-separated list of vision models.
- `--embedding-model`: comma-separated list of embedding models.
- `--safety-shield`: comma-separated list of safety shields.
- `--judge-model`: comma-separated list of judge models.
- `--embedding-dimension`: output dimensionality of the embedding model to use for testing. Default: 384

Each of the model options accepts a comma-separated list and can be used to generate multiple parameter combinations. Note that tests will be skipped if no model is specified.
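For example, passing two text models parameterizes every text test twice, once per model (the model names here are just the ones used elsewhere in this README):

```bash
pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
```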
Experimental options (under development):

- `--record-responses`: record new API responses instead of using cached ones
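Assuming it composes with the options above like any other pytest flag, usage would look like:

```bash
pytest -s -v tests/integration/inference \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct \
   --record-responses
```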
## Examples
Run all text inference tests with the `together` distribution and `meta-llama/Llama-3.1-8B-Instruct`:

```bash
pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct
```
Running all inference tests for a number of models:

```bash
TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
EMBEDDING_MODELS=all-MiniLM-L6-v2
export TOGETHER_API_KEY=<together_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=together \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS
```
The same thing, but instead of using the distribution, use an ad-hoc stack with just one provider (fireworks for inference):

```bash
export FIREWORKS_API_KEY=<fireworks_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=inference=fireworks \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS
```
Running Vector IO tests for a number of embedding models:

```bash
EMBEDDING_MODELS=all-MiniLM-L6-v2

pytest -s -v tests/integration/vector_io/ \
   --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
   --embedding-model=$EMBEDDING_MODELS
```