llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Matthew Farrellee a3e249807b
chore: remove vision model URL workarounds and simplify client creation (#2775)
The vision models are now available at the standard URL, so the
workaround code has been removed. This also simplifies the codebase by
eliminating the need for per-model client caching.

- Remove special URL handling for the meta/llama-3.2-11b/90b-vision-instruct models
- Convert _get_client method to _client property for cleaner API
- Remove unnecessary lru_cache decorator and functools import
- Simplify client creation logic to use single base URL for all models
2025-07-16 07:10:04 -07:00
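
As a rough sketch of the refactored client creation described above, assuming the adapter wraps an AsyncOpenAI-style client built from a config exposing `url` and `api_key` (the class and field names below are illustrative, not necessarily those used in nvidia.py):

```python
from openai import AsyncOpenAI


class NVIDIAInferenceAdapter:
    """Illustrative sketch only; names and config fields are assumptions."""

    def __init__(self, config) -> None:
        self._config = config  # assumed to carry .url and .api_key

    @property
    def _client(self) -> AsyncOpenAI:
        # A single base URL now serves every model, vision models included,
        # so the per-model URL lookup and lru_cache from the old _get_client
        # method are no longer needed.
        return AsyncOpenAI(
            base_url=f"{self._config.url}/v1",
            api_key=self._config.api_key,
        )
```

One plausible reason the cache can be dropped: constructing the client object is cheap relative to an inference request, so rebuilding it on each property access costs little while keeping the adapter stateless.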
__init__.py add NVIDIA NIM inference adapter (#355) 2024-11-23 15:59:00 -08:00
config.py fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
models.py ci: test safety with starter (#2628) 2025-07-09 16:53:50 +02:00
NVIDIA.md docs: Add NVIDIA platform distro docs (#1971) 2025-04-17 05:54:30 -07:00
nvidia.py chore: remove vision model URL workarounds and simplify client creation (#2775) 2025-07-16 07:10:04 -07:00
openai_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00