Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 18:00:36 +00:00)
The vision models are now available at the standard URL, so the workaround code has been removed. This also simplifies the codebase by eliminating the need for per-model client caching.

- Remove special URL handling for the meta/llama-3.2-11b/90b-vision-instruct models
- Convert the _get_client method to a _client property for a cleaner API
- Remove the unnecessary lru_cache decorator and the functools import
- Simplify client creation logic to use a single base URL for all models
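A minimal sketch of the refactor this commit describes, assuming an OpenAI-compatible async client and a config object exposing `url` and `api_key`; the class and attribute names here are illustrative, not copied from the llama-stack source:

```python
from openai import AsyncOpenAI


class NVIDIAInferenceAdapter:
    """Illustrative adapter showing the method-to-property refactor."""

    def __init__(self, config):
        # `config` is assumed to carry the NVIDIA endpoint URL and API key.
        self._config = config

    # Before the change (sketch): a per-model factory, wrapped in
    # functools.lru_cache so repeated calls for the same model reused
    # one client, with a special base URL for the vision models:
    #
    #     @lru_cache
    #     def _get_client(self, provider_model_id: str) -> AsyncOpenAI:
    #         base_url = _vision_workaround_url(provider_model_id)
    #         return AsyncOpenAI(base_url=base_url, api_key=self._config.api_key)

    # After the change: every model is served from the single base URL,
    # so a plain property replaces the cached per-model method and the
    # functools import can be dropped.
    @property
    def _client(self) -> AsyncOpenAI:
        return AsyncOpenAI(
            base_url=f"{self._config.url}/v1",
            api_key=self._config.api_key,
        )
```

Dropping the cache trades a small amount of per-access construction cost for simpler state, which is a reasonable exchange once a single base URL serves every model.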
| File |
|---|
| __init__.py |
| config.py |
| models.py |
| NVIDIA.md |
| nvidia.py |
| openai_utils.py |
| utils.py |