llama-stack-mirror/llama_stack/providers
Matthew Farrellee 8cc3fe7669 chore: remove vision model URL workarounds and simplify client creation
The vision models are now available at the standard URL, so the workaround
code has been removed. This also simplifies the codebase by eliminating
the need for per-model client caching.

- Remove special URL handling for meta/llama-3.2-11b/90b-vision-instruct models
- Convert _get_client method to _client property for cleaner API
- Remove unnecessary lru_cache decorator and functools import
- Simplify client creation logic to use single base URL for all models
2025-07-16 05:28:58 -04:00
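The refactor described above can be sketched as follows. This is a minimal illustration of the pattern (an `lru_cache`'d per-model `_get_client` method collapsed into a single `_client` property over one base URL); the class name, `Client` stand-in, and constructor parameters are assumptions, not the actual llama-stack code:

```python
from dataclasses import dataclass


@dataclass
class Client:
    """Stand-in for an OpenAI-style API client (illustrative only)."""
    base_url: str
    api_key: str


class InferenceAdapter:
    """Hypothetical adapter showing the simplified shape: one lazily
    created client shared by all models, instead of a cached
    _get_client(model) that special-cased vision-model URLs."""

    def __init__(self, base_url: str, api_key: str) -> None:
        self.base_url = base_url
        self.api_key = api_key
        self._client_instance: Client | None = None

    @property
    def _client(self) -> Client:
        # Vision models now live at the standard URL, so a single
        # client suffices; no per-model URL lookup or lru_cache needed.
        if self._client_instance is None:
            self._client_instance = Client(self.base_url, self.api_key)
        return self._client_instance
```

Callers simply use `adapter._client` for every model; repeated accesses return the same cached instance without any `functools` machinery.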
Name          Last commit                                                                                 Date
inline        chore: Move vector store kvstore implementation into openai_vector_store_mixin.py (#2748)   2025-07-14 18:10:35 -04:00
registry      fix: only load mcp when enabled in tool_group (#2621)                                       2025-07-04 20:27:05 +05:30
remote        chore: remove vision model URL workarounds and simplify client creation                     2025-07-16 05:28:58 -04:00
utils         fix: Fix /vector-stores/create API when vector store with duplicate name (#2617)            2025-07-15 11:24:41 -04:00
__init__.py   API Updates (#73)                                                                           2024-09-17 19:51:35 -07:00
datatypes.py  docs: auto generated documentation for providers (#2543)                                    2025-06-30 15:13:20 +02:00