llama-stack-mirror/llama_stack/providers/utils/inference

Ashwin Bharambe · 185de61d8e · 2025-10-16 10:12:13 -07:00

fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826)
As indicated in the title. Our `starter` distribution enables all remote
providers _very intentionally_ because we believe it creates an easier,
more welcoming experience for new folks using the software. If we do
that and then slam the logs with errors, making users question their
life choices, it is not so good :)

Note that this fix is limited in scope. If you ever try to actually
instantiate the OpenAI client from a code path without an API key being
present, you deserve to fail hard.
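
For illustration, here is a minimal sketch of the intended behavior. The
class and method names (`OpenAIMixinSketch`, `list_provider_model_ids`) are
hypothetical, not the actual contents of `openai_mixin.py`:

```python
# Sketch only: illustrates the behavior described above, not the real
# openai_mixin.py. Names and structure here are assumptions.
import logging

from openai import OpenAI

logger = logging.getLogger(__name__)


class OpenAIMixinSketch:
    api_key: str | None = None

    @property
    def client(self) -> OpenAI:
        # Code paths that actually need a client still fail hard.
        if not self.api_key:
            raise ValueError("API key is required to instantiate the OpenAI client")
        return OpenAI(api_key=self.api_key)

    def list_provider_model_ids(self) -> list[str]:
        # Model listing degrades gracefully: one quiet skip instead of a
        # wall of error logs when no key was configured.
        if not self.api_key:
            logger.debug("No API key configured; skipping model listing")
            return []
        return [model.id for model in self.client.models.list()]
```

The key design point: listing is a best-effort startup convenience, so it
skips silently, while client construction is an explicit request that must
surface a hard error.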

## Test Plan

Run `llama stack run starter` with `OPENAI_API_KEY` set. No more wall of
text, just one message saying "listed 96 models".
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | refactor: replace default all-MiniLM-L6-v2 embedding model by nomic-embed-text-v1.5 in Llama Stack (#3183) | 2025-10-14 10:44:20 -04:00 |
| `inference_store.py` | chore: fix/add logging categories (#3658) | 2025-10-02 13:10:13 -07:00 |
| `litellm_openai_mixin.py` | feat(api)!: support extra_body to embeddings and vector_stores APIs (#3794) | 2025-10-12 19:01:52 -07:00 |
| `model_registry.py` | feat: use SecretStr for inference provider auth credentials (#3724) | 2025-10-10 07:32:50 -07:00 |
| `openai_compat.py` | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| `openai_mixin.py` | fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826) | 2025-10-16 10:12:13 -07:00 |
| `prompt_adapter.py` | chore!: Safety api refactoring to use OpenAIMessageParam (#3796) | 2025-10-12 08:01:00 -07:00 |