llama-stack-mirror/llama_stack/providers/remote
Matthew Farrellee 8cc3fe7669 chore: remove vision model URL workarounds and simplify client creation
The vision models are now available at the standard URL, so the workaround
code has been removed. This also simplifies the codebase by eliminating
the need for per-model client caching.

- Remove special URL handling for meta/llama-3.2-11b/90b-vision-instruct models
- Convert _get_client method to _client property for cleaner API
- Remove unnecessary lru_cache decorator and functools import
- Simplify client creation logic to use single base URL for all models
2025-07-16 05:28:58 -04:00
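The change described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual llama-stack code: class and attribute names (`InferenceAdapter`, `FakeClient`, `base_url`, `api_key`) are invented for the example, with a stand-in client class so the sketch is self-contained.

```python
class FakeClient:
    # Stand-in for an OpenAI-style client, used only so this sketch runs.
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key


class InferenceAdapter:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key

    @property
    def _client(self) -> FakeClient:
        # One client built from a single base URL for all models:
        # no per-model URL workaround, and no lru_cache-backed
        # _get_client(model) method needed.
        return FakeClient(base_url=self.base_url, api_key=self.api_key)
```

Exposing the client as a property keeps call sites terse (`self._client.completions...` rather than `self._get_client(model)...`) and, because every model shares one URL, dropping the `functools.lru_cache` per-model cache loses nothing.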
agents test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
datasetio fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
eval refactor(env)!: enhanced environment variable substitution (#2490) 2025-06-26 08:20:08 +05:30
inference chore: remove vision model URL workarounds and simplify client creation 2025-07-16 05:28:58 -04:00
post_training fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
safety fix: sambanova shields and model validation (#2693) 2025-07-11 16:29:15 -04:00
tool_runtime fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
vector_io chore: Adding OpenAI Vector Stores Files API compatibility for PGVector (#2755) 2025-07-15 15:46:49 -04:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00