test: Implement vector store search test (#3001)

# What does this PR do?
Implements an integration test exercising OpenAI-compatible vector store search across the `vector`, `keyword`, and `hybrid` search modes, along with a helper that skips a mode when no available provider supports it.


## Test Plan
```bash
pytest tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes --stack-config=http://localhost:8321 --embedding-model=all-MiniLM-L6-v2 -v
```

Signed-off-by: Varsha Prasad Narsing <varshaprasad96@gmail.com>
Author: Varsha — 2025-08-02 15:57:38 -07:00 (committed by GitHub)
Commit: dbfc15123e (parent: 3c2aee610d)
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)

```diff
@@ -38,6 +38,37 @@ def skip_if_provider_doesnt_support_openai_vector_stores(client_with_models):
     pytest.skip("OpenAI vector stores are not supported by any provider")
 
 
+def skip_if_provider_doesnt_support_openai_vector_stores_search(client_with_models, search_mode):
+    vector_io_providers = [p for p in client_with_models.providers.list() if p.api == "vector_io"]
+    search_mode_support = {
+        "vector": [
+            "inline::faiss",
+            "inline::sqlite-vec",
+            "inline::milvus",
+            "inline::chromadb",
+            "inline::qdrant",
+            "remote::pgvector",
+            "remote::chromadb",
+            "remote::weaviate",
+            "remote::qdrant",
+        ],
+        "keyword": [
+            "inline::sqlite-vec",
+        ],
+        "hybrid": [
+            "inline::sqlite-vec",
+        ],
+    }
+    supported_providers = search_mode_support.get(search_mode, [])
+    for p in vector_io_providers:
+        if p.provider_type in supported_providers:
+            return
+
+    pytest.skip(
+        f"Search mode '{search_mode}' is not supported by any available provider. "
+        f"Supported providers for '{search_mode}': {supported_providers}"
+    )
+
+
 @pytest.fixture
 def openai_client(client_with_models):
     base_url = f"{client_with_models.base_url}/v1/openai/v1"
```
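The skip helper above reduces to a membership check: each search mode maps to a list of provider types, and the test runs only if at least one registered `vector_io` provider appears in that list. A minimal standalone sketch of the same selection logic — the `Provider` stub and `search_mode_is_supported` name are illustrative, not the real llama-stack client types:

```python
# Stand-in for the provider objects returned by client.providers.list();
# the support table mirrors the one in the diff above.
from dataclasses import dataclass


@dataclass
class Provider:
    api: str
    provider_type: str


SEARCH_MODE_SUPPORT = {
    "vector": [
        "inline::faiss",
        "inline::sqlite-vec",
        "inline::milvus",
        "inline::chromadb",
        "inline::qdrant",
        "remote::pgvector",
        "remote::chromadb",
        "remote::weaviate",
        "remote::qdrant",
    ],
    "keyword": ["inline::sqlite-vec"],
    "hybrid": ["inline::sqlite-vec"],
}


def search_mode_is_supported(providers, search_mode):
    """True if any vector_io provider supports the given search mode."""
    supported = SEARCH_MODE_SUPPORT.get(search_mode, [])
    vector_io = [p for p in providers if p.api == "vector_io"]
    return any(p.provider_type in supported for p in vector_io)


providers = [Provider(api="vector_io", provider_type="inline::faiss")]
print(search_mode_is_supported(providers, "vector"))   # True
print(search_mode_is_supported(providers, "keyword"))  # False: only sqlite-vec supports keyword
```

In the real helper the negative case calls `pytest.skip(...)` with a message listing the supported providers, rather than returning `False`.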
```diff
@@ -865,21 +896,26 @@ def test_create_vector_store_files_duplicate_vector_store_name(compat_client_wit
     assert len(vector_stores_list_post_delete.data) == 1
 
 
-@pytest.mark.skip(reason="Client library needs to be scaffolded to support search_mode parameter")
-def test_openai_vector_store_search_modes():
-    """Test OpenAI vector store search with different search modes.
-
-    This test is skipped because the client library
-    needs to be regenerated from the updated OpenAPI spec to support the
-    search_mode parameter. Once the client library is updated, this test
-    can be enabled to verify:
-    - vector search mode (default)
-    - keyword search mode
-    - hybrid search mode
-    - invalid search mode validation
-    """
-    # TODO: Enable this test once llama_stack_client is updated to support search_mode
-    # The server-side implementation is complete but the client
-    # library needs to be updated:
-    # https://github.com/meta-llama/llama-stack-client-python/blob/52c0b5d23e9ae67ceb09d755143d436f38c20547/src/llama_stack_client/resources/vector_stores/vector_stores.py#L314
-    pass
+@pytest.mark.parametrize("search_mode", ["vector", "keyword", "hybrid"])
+def test_openai_vector_store_search_modes(llama_stack_client, client_with_models, sample_chunks, search_mode):
+    skip_if_provider_doesnt_support_openai_vector_stores(client_with_models)
+    skip_if_provider_doesnt_support_openai_vector_stores_search(client_with_models, search_mode)
+
+    vector_store = llama_stack_client.vector_stores.create(
+        name=f"search_mode_test_{search_mode}",
+        metadata={"purpose": "search_mode_testing"},
+    )
+
+    client_with_models.vector_io.insert(
+        vector_db_id=vector_store.id,
+        chunks=sample_chunks,
+    )
+
+    query = "Python programming language"
+
+    search_response = llama_stack_client.vector_stores.search(
+        vector_store_id=vector_store.id,
+        query=query,
+        max_num_results=4,
+        search_mode=search_mode,
+    )
+    assert search_response is not None
```
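With the `@pytest.mark.skip` replaced by `@pytest.mark.parametrize`, pytest generates one test per mode (node IDs like `test_openai_vector_store_search_modes[keyword]`), and each run follows the same create → insert → search call sequence. A hedged, self-contained sketch of that sequence using an in-memory stand-in for the client — `FakeVectorStores` and its keyword-matching "search" are illustrative only, not the real llama-stack client or ranking behavior:

```python
# Illustrative stand-in for the client calls made by the new test; the real
# llama_stack_client talks to a running stack, this fake just records state.
import uuid


class FakeVectorStores:
    def __init__(self):
        self.stores = {}

    def create(self, name, metadata=None):
        # metadata is accepted but unused in this sketch
        store_id = f"vs_{uuid.uuid4().hex[:8]}"
        self.stores[store_id] = {"name": name, "chunks": []}
        return type("Store", (), {"id": store_id, "name": name})()

    def search(self, vector_store_id, query, max_num_results=4, search_mode="vector"):
        # A real provider ranks by embedding similarity (vector), term matching
        # (keyword), or a fusion of both (hybrid); this sketch just returns up
        # to max_num_results chunks containing any query term.
        chunks = self.stores[vector_store_id]["chunks"]
        terms = query.lower().split()
        hits = [c for c in chunks if any(t in c.lower() for t in terms)]
        return {"search_mode": search_mode, "data": hits[:max_num_results]}


client = FakeVectorStores()
store = client.create(name="search_mode_test_vector", metadata={"purpose": "search_mode_testing"})
# stand-in for client_with_models.vector_io.insert(...)
client.stores[store.id]["chunks"] = [
    "Python is a programming language",
    "Rust ships a borrow checker",
]

for mode in ["vector", "keyword", "hybrid"]:
    resp = client.search(vector_store_id=store.id, query="Python programming language", search_mode=mode)
    assert resp is not None and len(resp["data"]) == 1
print("all modes returned results")
```

The real test only asserts the response is non-`None`; stronger assertions on ranking would be provider-specific, since each mode can order results differently.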