llama-stack-mirror/llama_stack/providers
Ashwin Bharambe 185de61d8e
fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826)
As indicated in the title. Our `starter` distribution enables all remote
providers _very intentionally_ because we believe it creates an easier,
more welcoming experience for new folks using the software. If we do
that and then slam the logs with errors that make them question their life
choices, it is not so good :)

Note that this fix is limited in scope. If you ever try to actually
instantiate the OpenAI client from a code path without an API key being
present, you deserve to fail hard.

## Test Plan

Run `llama stack run starter` with `OPENAI_API_KEY` set. No more wall of
text, just one message saying "listed 96 models".
2025-10-16 10:12:13 -07:00
| Name | Last commit | Last updated |
| --- | --- | --- |
| inline | fix(responses): fixes, re-record tests (#3820) | 2025-10-15 16:37:42 -07:00 |
| registry | feat: Enable setting a default embedding model in the stack (#3803) | 2025-10-14 18:25:13 -07:00 |
| remote | feat(gemini): Support gemini-embedding-001 and fix models/ prefix in metadata keys (#3813) | 2025-10-15 12:22:10 -04:00 |
| utils | fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826) | 2025-10-16 10:12:13 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: combine ProviderSpec datatypes (#3378) | 2025-09-18 16:10:00 +02:00 |