Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-24 00:47:00 +00:00)
Propagate test IDs from client to server via HTTP headers to maintain proper test isolation when running with server-based stack configs. Without this, recorded/replayed inference requests in server mode would leak across tests. A hedged sketch of this client/server plumbing follows the file listing below.

Changes:
- Patch client _prepare_request to inject the test ID into the provider data header
- Sync test context from provider data on the server side before storage operations
- Set the LLAMA_STACK_TEST_STACK_CONFIG_TYPE env var based on the stack config
- Configure console width for cleaner log output in CI
- Add a SQLITE_STORE_DIR temp directory for test data isolation
Directory contents:
- recordings
- __init__.py
- dog.png
- test_openai_completion.py
- test_openai_embeddings.py
- test_openai_vision_inference.py
- test_tools_with_schemas.py
- test_vision_inference.py
- vision_test_1.jpg
- vision_test_2.jpg
- vision_test_3.jpg
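A minimal sketch of how the test-ID propagation described in the commit message might look. It assumes the provider data travels in an `X-LlamaStack-Provider-Data` header and uses `__test_id` as a hypothetical key for the test ID; the real patch lives in the test fixtures, so all names here are illustrative, not the actual implementation.

```python
# Sketch only: client-side header injection and server-side context sync.
# PROVIDER_DATA_HEADER and the "__test_id" key are assumptions for illustration.
import functools
import json
from contextvars import ContextVar

PROVIDER_DATA_HEADER = "X-LlamaStack-Provider-Data"  # assumed header name
CURRENT_TEST_ID = ContextVar("test_id", default=None)


def patch_client_for_test_id(client, test_id: str) -> None:
    """Wrap client._prepare_request so every outgoing request carries the test ID."""
    original = client._prepare_request

    @functools.wraps(original)
    def _prepare_request(request, *args, **kwargs):
        raw = request.headers.get(PROVIDER_DATA_HEADER) or "{}"
        provider_data = json.loads(raw)
        provider_data["__test_id"] = test_id  # hypothetical key
        request.headers[PROVIDER_DATA_HEADER] = json.dumps(provider_data)
        return original(request, *args, **kwargs)

    client._prepare_request = _prepare_request


def sync_test_context(provider_data: dict) -> None:
    """Server side: adopt the client's test ID before any storage operation,
    so recorded/replayed inference requests stay scoped to a single test."""
    test_id = provider_data.get("__test_id")
    if test_id:
        CURRENT_TEST_ID.set(test_id)
```

In server mode, the server would call something like `sync_test_context(...)` after parsing the provider data header and before touching the recordings store; with a library (in-process) stack the context is already shared, which is why the harness also records the stack type in LLAMA_STACK_TEST_STACK_CONFIG_TYPE.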