# What does this PR do?

Add OpenAI-compatible vector store file batches API. This functionality is needed to attach many files to a vector store as a batch: https://github.com/llamastack/llama-stack/issues/3533

API stubs were merged in https://github.com/llamastack/llama-stack/pull/3615. This PR adds persistence for file batches, as discussed in https://github.com/llamastack/llama-stack/pull/3544. (Generated with Claude Code and reviewed by me.)

## Test Plan

1. Unit tests pass.
2. Verified that the cc-vec integration with LlamaStackClient works with the file batches API: https://github.com/raghotham/cc-vec
3. Integration tests pass.
# Test Recording System

This directory contains recorded inference API responses used for deterministic testing without requiring live API access.
## Structure

- `responses/` - JSON files containing request/response pairs for inference operations
## Recording Format

Each JSON file contains:

- `request` - the normalized request parameters (method, endpoint, body)
- `response` - the response body (serialized from Pydantic models)
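As an illustration, the snippet below loads one recording and reads those two fields. The file name here is hypothetical; only the `request`/`response` keys come from the format described above.

```python
import json
from pathlib import Path

# Hypothetical recording file; real names in responses/ are generated, not hand-picked.
recording_path = Path("responses") / "example.json"

with recording_path.open() as f:
    recording = json.load(f)

# Per the format above: a normalized request and its serialized response body.
request = recording["request"]
print(request["method"], request["endpoint"])
print(recording["response"])
```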
## Normalization

To reduce noise in git diffs, the recording system automatically normalizes fields that vary between runs but don't affect test behavior:
### OpenAI-style responses

- `id` - deterministic hash based on the request: `rec-{request_hash[:12]}`
- `created` - normalized to epoch: `0`
### Ollama-style responses

- `created_at` - normalized to `"1970-01-01T00:00:00.000000Z"`
- `total_duration` - normalized to `0`
- `load_duration` - normalized to `0`
- `prompt_eval_duration` - normalized to `0`
- `eval_duration` - normalized to `0`
These normalizations ensure that re-recording tests produces minimal git diffs, making it easier to review actual changes to test behavior.
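To make the scheme concrete, here is a minimal sketch of what such a normalization pass could look like, assuming recordings carry the `request`/`response` dicts described above. The function name and the SHA-256 hashing choice are illustrative assumptions, not the project's actual implementation.

```python
import hashlib
import json


def normalize_response(request: dict, response: dict) -> dict:
    """Illustrative sketch: replace run-varying fields with deterministic values."""
    # Deterministic hash of the normalized request, matching the
    # `rec-{request_hash[:12]}` scheme above (hash algorithm assumed).
    request_hash = hashlib.sha256(
        json.dumps(request, sort_keys=True).encode()
    ).hexdigest()

    normalized = dict(response)

    # OpenAI-style fields
    if "id" in normalized:
        normalized["id"] = f"rec-{request_hash[:12]}"
    if "created" in normalized:
        normalized["created"] = 0  # epoch

    # Ollama-style fields
    if "created_at" in normalized:
        normalized["created_at"] = "1970-01-01T00:00:00.000000Z"
    for field in ("total_duration", "load_duration",
                  "prompt_eval_duration", "eval_duration"):
        if field in normalized:
            normalized[field] = 0

    return normalized
```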
## Usage

### Replay mode (default)

Responses are replayed from recordings:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=replay pytest tests/integration/
```
### Record-if-missing mode (recommended for adding new tests)

Records only when no recording exists, otherwise replays. Use this for iterative development:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record-if-missing pytest tests/integration/
```
### Recording mode

Force-records all API interactions, overwriting existing recordings. Use with caution:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record pytest tests/integration/
```
### Live mode

Skip recordings entirely and use live APIs:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=live pytest tests/integration/
```
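For intuition, a small sketch of how a test fixture might read this variable. The variable name and the `replay` default come from the commands above; the fixture itself is a hypothetical illustration, not the project's actual wiring.

```python
import os

import pytest

VALID_MODES = {"replay", "record-if-missing", "record", "live"}


@pytest.fixture
def inference_mode() -> str:
    """Resolve the recording mode, defaulting to replay as documented above."""
    mode = os.environ.get("LLAMA_STACK_TEST_INFERENCE_MODE", "replay")
    if mode not in VALID_MODES:
        raise ValueError(f"unknown inference mode: {mode!r}")
    return mode
```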
## Re-normalizing Existing Recordings

If you need to apply normalization to existing recordings (e.g., after updating the normalization logic):
```bash
python scripts/normalize_recordings.py
```

Use `--dry-run` to preview changes without modifying files.
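Roughly, such a pass walks the recordings, re-applies normalization, and either reports or writes the result. The sketch below reuses the hypothetical `normalize_response` helper from the Normalization section; the real script's internals may differ.

```python
import argparse
import json
from pathlib import Path


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Re-apply normalization to existing recordings (sketch)."
    )
    parser.add_argument("--dry-run", action="store_true",
                        help="Report changes without modifying files.")
    args = parser.parse_args()

    for path in sorted(Path("responses").glob("*.json")):
        recording = json.loads(path.read_text())
        # normalize_response: hypothetical helper sketched in the Normalization section.
        new_response = normalize_response(recording["request"], recording["response"])
        if new_response != recording["response"]:
            print(f"{'would update' if args.dry_run else 'updating'} {path}")
            if not args.dry_run:
                recording["response"] = new_response
                path.write_text(json.dumps(recording, indent=2) + "\n")


if __name__ == "__main__":
    main()
```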