Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-10 13:28:40 +00:00
This replaces the static model listing for any provider using OpenAIMixin.

Test with:
- new unit tests
- manual checks for llama-api, openai, groq, and gemini:

```
for provider in llama-openai-compat openai groq gemini; do
  uv run llama stack build --image-type venv --providers inference=remote::$provider --run &
  uv run --with llama-stack-client llama-stack-client models list | grep Total
done
```

Results (17 Sep 2025):
- llama-api: 4
- openai: 86
- groq: 21
- gemini: 66
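The idea behind the change can be sketched as follows. This is a minimal illustrative sketch, not the actual OpenAIMixin implementation: the class and method names (`DynamicModelListingMixin`, `list_models`, the stub client) are hypothetical. It shows the general pattern of replacing a hard-coded model list with a query against the provider's OpenAI-compatible `/v1/models` endpoint via an OpenAI-style client.

```python
from dataclasses import dataclass
from typing import Any, List


@dataclass
class Model:
    identifier: str
    provider_id: str


class DynamicModelListingMixin:
    """Hypothetical mixin: providers expose an OpenAI-style client and
    inherit dynamic model listing instead of a static list."""

    provider_id: str = "unknown"

    @property
    def client(self) -> Any:  # supplied by the concrete provider
        raise NotImplementedError

    def list_models(self) -> List[Model]:
        # client.models.list() follows the OpenAI SDK shape: an iterable
        # of objects carrying an `.id` attribute.
        return [
            Model(identifier=m.id, provider_id=self.provider_id)
            for m in self.client.models.list()
        ]


# Stub objects standing in for a real OpenAI SDK client in this sketch.
class _StubModel:
    def __init__(self, id: str) -> None:
        self.id = id


class _StubModels:
    def list(self):
        return [_StubModel("gpt-4o"), _StubModel("gpt-4o-mini")]


class _StubClient:
    models = _StubModels()


class StubProvider(DynamicModelListingMixin):
    provider_id = "openai"

    @property
    def client(self) -> Any:
        return _StubClient()
```

With this pattern, the model counts reported above simply reflect whatever each provider's endpoint returns at the time of the query, rather than a list frozen into the codebase.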