llama-stack-mirror/tests
Matthew Farrellee a3e249807b
chore: remove vision model URL workarounds and simplify client creation (#2775)
The vision models are now available at the standard URL, so the
workaround code has been removed. This also simplifies the codebase by
eliminating the need for per-model client caching.

- Remove special URL handling for meta/llama-3.2-11b/90b-vision-instruct models
- Convert _get_client method to _client property for cleaner API
- Remove unnecessary lru_cache decorator and functools import
- Simplify client creation logic to use single base URL for all models (see the sketch below)
2025-07-16 07:10:04 -07:00
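As a rough illustration of the change this commit describes (hypothetical class and parameter names, not the actual llama-stack code), serving every model from one base URL lets a per-model _get_client lookup collapse into a plain _client property with no lru_cache or functools import:

```python
from openai import AsyncOpenAI  # OpenAI-compatible async client, used here purely as an example


class ExampleInferenceAdapter:
    """Hypothetical adapter sketching the simplified client creation."""

    def __init__(self, base_url: str, api_key: str) -> None:
        self._base_url = base_url
        self._api_key = api_key

    @property
    def _client(self) -> AsyncOpenAI:
        # One base URL serves every model, including the 11b/90b vision
        # models, so there is no per-model URL lookup and nothing to cache.
        return AsyncOpenAI(base_url=self._base_url, api_key=self._api_key)
```

Constructing the client in a property keeps the adapter simple: callers read adapter._client directly instead of passing a model name to a cached _get_client(model_id) helper.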
Name | Last commit | Date
client-sdk/post_training | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00
common | feat(responses): implement full multi-turn support (#2295) | 2025-06-02 15:35:49 -07:00
external-provider/llama-stack-provider-ollama | refactor: set proper name for embedding all-minilm:l6-v2 and update to use "starter" in detailed_tutorial (#2627) | 2025-07-06 09:07:37 +05:30
integration | chore: Adding OpenAI Vector Stores Files API compatibility for PGVector (#2755) | 2025-07-15 15:46:49 -04:00
unit | chore: remove vision model URL workarounds and simplify client creation (#2775) | 2025-07-16 07:10:04 -07:00
verifications | fix(ollama): Download remote image URLs for Ollama (#2551) | 2025-06-30 20:36:11 +05:30
__init__.py | refactor(test): introduce --stack-config and simplify options (#1404) | 2025-03-05 17:02:02 -08:00
Containerfile | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00
README.md | docs: revamp testing documentation (#2155) | 2025-05-13 11:28:29 -07:00

Llama Stack Tests

Llama Stack has multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase.

Testing Type | Details
Unit | unit/README.md
Integration | integration/README.md
Verification | verifications/README.md