# What does this PR do?

This replaces the static model listing with dynamic model listing for every provider that uses OpenAIMixin. Currently that covers:

- anthropic
- azure openai
- gemini
- groq
- llama-api
- nvidia
- openai
- sambanova
- tgi
- vertexai
- vllm

Not changed: together, which has its own implementation. A sketch of the idea follows below.
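For context, dynamic listing amounts to querying the provider's OpenAI-compatible `/v1/models` endpoint at runtime instead of shipping a hard-coded model list. Here is a minimal sketch of that pattern using the `openai` client directly; the `OpenAIAdapter` class, its method names, and the example endpoint are hypothetical illustrations, not the actual OpenAIMixin interface:

```python
# Minimal sketch of dynamic model listing against an OpenAI-compatible
# endpoint. OpenAIAdapter and list_models() are hypothetical names for
# illustration only, not the real OpenAIMixin API.
from openai import OpenAI


class OpenAIAdapter:
    def __init__(self, base_url: str, api_key: str):
        # Any OpenAI-compatible endpoint (openai, groq, gemini, ...) works here.
        self.client = OpenAI(base_url=base_url, api_key=api_key)

    def list_models(self) -> list[str]:
        # GET /v1/models returns the provider's current catalog, so newly
        # released models appear without a code change.
        return [model.id for model in self.client.models.list()]


if __name__ == "__main__":
    adapter = OpenAIAdapter(
        base_url="https://api.groq.com/openai/v1",  # example endpoint
        api_key="YOUR_API_KEY",
    )
    print(f"Total models: {len(adapter.list_models())}")
```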
## Test Plan

- new unit tests
- manual for llama-api, openai, groq, gemini

```
for provider in llama-openai-compat openai groq gemini; do
  uv run llama stack build --image-type venv --providers inference=remote::$provider --run &
  uv run --with llama-stack-client llama-stack-client models list | grep Total
done
```

Results (17 Sep 2025):

- llama-api: 4
- openai: 86
- groq: 21
- gemini: 66

closes #3467