llama-stack-mirror/llama_stack/core
Charlie Doern 8b9af03a1b
fix: refresh log should be debug (#3720)
# What does this PR do?

When using a distro like `starter`, where a number of providers are disabled, I should not see logs like:

```
         in the provider data header, e.g. x-llamastack-provider-data: {"groq_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:38:52,117 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider sambanova: API key is not set. Please provide a valid
         API key in the provider data header, e.g. x-llamastack-provider-data: {"sambanova_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:43:52,123 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider fireworks: Pass Fireworks API Key in the header
         X-LlamaStack-Provider-Data as { "fireworks_api_key": <your api key>}
WARNING  2025-10-07 08:43:52,126 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider together: Pass Together API Key in the header
         X-LlamaStack-Provider-Data as { "together_api_key": <your api key>}
WARNING  2025-10-07 08:43:52,129 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider openai: API key is not set. Please provide a valid API
         key in the provider data header, e.g. x-llamastack-provider-data: {"openai_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:43:52,132 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider anthropic: API key is not set. Please provide a valid
         API key in the provider data header, e.g. x-llamastack-provider-data: {"anthropic_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:43:52,136 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider gemini: API key is not set. Please provide a valid API
         key in the provider data header, e.g. x-llamastack-provider-data: {"gemini_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:43:52,139 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider groq: API key is not set. Please provide a valid API key
         in the provider data header, e.g. x-llamastack-provider-data: {"groq_api_key": "<API_KEY>"}, or in the provider config.
WARNING  2025-10-07 08:43:52,142 llama_stack.core.routing_tables.models:36 core::routing_tables: Model refresh failed for provider sambanova: API key is not set. Please provide a valid
         API key in the provider data header, e.g. x-llamastack-provider-data: {"sambanova_api_key": "<API_KEY>"}, or in the provider config.
^CINFO     2025-10-07 08:46:11,996 llama_stack.core.utils.exec:75 core:
```

emitted as WARNING. Switch them to DEBUG.
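
This is purely a log-level change, not a behavior change. As a minimal sketch of the idea, assuming the routing table reports refresh failures through a standard Python-style logger (the function and names below are illustrative stand-ins, not the actual `llama_stack.core.routing_tables.models` code):

```python
import logging

# Hypothetical stand-in for the routing-table refresh path; it shows the
# refresh-failure message moving from WARNING to DEBUG.
logger = logging.getLogger("core::routing_tables")


async def refresh_provider_models(provider_id: str, provider) -> None:
    try:
        await provider.list_models()  # assumed provider interface, for illustration
    except Exception as e:
        # Before: logger.warning(...) -- noisy when many providers are simply
        # unconfigured (missing API keys) in distros like starter.
        # After: the detail stays available, but only at DEBUG verbosity.
        logger.debug(f"Model refresh failed for provider {provider_id}: {e}")
```

If the distro's logging configuration is raised to DEBUG, the same messages remain available when troubleshooting a provider that should have been configured.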

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-10-07 09:04:07 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| `access_control` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `conversations` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `prompts` | feat: Adding OpenAI Prompts API (#3319) | 2025-09-08 11:05:13 -04:00 |
| `routers` | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |
| `routing_tables` | fix: refresh log should be debug (#3720) | 2025-10-07 09:04:07 -04:00 |
| `server` | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| `store` | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| `ui` | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| `utils` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `__init__.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `build.py` | feat(distro): no huggingface provider for starter (#3258) | 2025-08-26 14:06:36 -07:00 |
| `build_container.sh` | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |
| `build_venv.sh` | fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193) | 2025-08-18 17:02:24 -07:00 |
| `client.py` | feat: introduce API leveling, post_training, eval to v1alpha (#3449) | 2025-09-26 16:18:07 +02:00 |
| `common.sh` | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| `configure.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `datatypes.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `distribution.py` | feat: allow for multiple external provider specs (#3341) | 2025-10-06 15:26:38 +02:00 |
| `external.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `inspect.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `library_client.py` | feat(api): add extra_body parameter support with shields example (#3670) | 2025-10-03 13:25:09 -07:00 |
| `providers.py` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `request_headers.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |
| `resolver.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `stack.py` | feat: Add OpenAI Conversations API (#3429) | 2025-10-03 08:47:18 -07:00 |
| `start_stack.sh` | chore: use uvicorn to start llama stack server everywhere (#3625) | 2025-10-06 14:27:40 +02:00 |