# Llama Stack Tests
Llama Stack uses multiple layers of testing to ensure continuous functionality and prevent regressions in the codebase. Each layer is documented in its own README, summarized in the table below; an illustrative unit-test sketch follows the table.
| Testing Type | Details |
|---|---|
| Unit | [unit/README.md](unit/README.md) |
| Integration | [integration/README.md](integration/README.md) |
| Verification | [verifications/README.md](verifications/README.md) |
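
For concreteness, here is a minimal sketch of the kind of fast, isolated check that belongs at the unit layer. It assumes pytest as the test runner; the function under test, `add`, is a hypothetical stand-in invented for this illustration and is not part of the llama-stack codebase.

```python
# Minimal, self-contained sketch of a unit-layer test.
# NOTE: `add` is a hypothetical stand-in for real library code under test;
# it is not an actual llama-stack function.
import pytest


def add(a: int, b: int) -> int:
    """Toy function standing in for the unit under test."""
    return a + b


@pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (-1, 1, 0), (0, 0, 0)])
def test_add(a: int, b: int, expected: int) -> None:
    # Unit tests exercise a single function in isolation, with no
    # network, model, or provider dependencies.
    assert add(a, b) == expected
```

Run a file like this with `pytest path/to/test_file.py`; see each layer's README for the exact invocations used in this repository.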