llama-stack-mirror/tests/unit/providers/utils
Matthew Farrellee 521865c388
feat: include all models from provider's /v1/models (#3471)
# What does this PR do?

This replaces the static model listing with a dynamic listing fetched from the
provider's `/v1/models` endpoint for any provider using `OpenAIMixin` (a rough
sketch of the approach follows the list below).

Currently affected providers:
 - anthropic
 - azure openai
 - gemini
 - groq
 - llama-api
 - nvidia
 - openai
 - sambanova
 - tgi
 - vertexai
 - vllm
 - not changed: together, which has its own implementation

## Test Plan

 - new unit tests
 - manual verification for llama-api, openai, groq, and gemini:

```
for provider in llama-openai-compat openai groq gemini; do
   uv run llama stack build --image-type venv --providers inference=remote::$provider --run &
   sleep 30  # give the server time to start before querying it
   uv run --with llama-stack-client llama-stack-client models list | grep Total
   kill %1   # stop the background server before the next iteration
done
```

Results (17 Sep 2025):
 - llama-api: 4
 - openai: 86
 - groq: 21
 - gemini: 66


Closes #3467
2025-09-18 05:17:11 -04:00
| Name | Last commit | Date |
|------|-------------|------|
| `inference` | feat: include all models from provider's /v1/models (#3471) | 2025-09-18 05:17:11 -04:00 |
| `memory` | chore: Updating documentation, adding exception handling for Vector Stores in RAG Tool, more tests on migration, and migrate off of inference_api for context_retriever for RAG (#3367) | 2025-09-11 14:20:11 +02:00 |
| `__init__.py` | fix: add check for interleavedContent (#1973) | 2025-05-06 09:55:07 -07:00 |
| `test_model_registry.py` | fix: Fix unit tests CI and failing tests (#2928) | 2025-07-28 10:07:26 -07:00 |
| `test_scheduler.py` | chore: default to pytest asyncio-mode=auto (#2730) | 2025-07-11 13:00:24 -07:00 |