Charlie Doern 090acfc458 fix: better error message when db is out of date
Currently, if you:

1. `export OLLAMA_URL=http://localhost:11434`
2. `llama stack run --image-type venv starter`
3. do some chat completions successfully
4. kill the server
5. unset OLLAMA_URL
6. `llama stack run --image-type venv starter`
7. do some more chat completions

you get errors like:

```
           File "/Users/charliedoern/projects/Documents/llama-stack/llama_stack/core/routing_tables/models.py", line 66, in get_provider_impl
             return self.impls_by_provider_id
                    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
         KeyError: 'ollama'
```

and in the client:
```
INFO:httpx:HTTP Request: POST http://localhost:8321/v1/openai/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
INFO:llama_stack_client._base_client:Retrying request to /v1/openai/v1/chat/completions in 0.482010 seconds
INFO:httpx:HTTP Request: POST http://localhost:8321/v1/openai/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
INFO:llama_stack_client._base_client:Retrying request to /v1/openai/v1/chat/completions in 0.883701 seconds
INFO:httpx:HTTP Request: POST http://localhost:8321/v1/openai/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
╭───────────────────────────────────────────────────────────────────────────────────────────────╮
│ Failed to inference chat-completion                                                           │
│                                                                                               │
│ Error Type: InternalServerError                                                               │
│ Details: Error code: 500 - {'detail': 'Internal server error: An unexpected error occurred.'} │
╰───────────────────────────────────────────────────────────────────────────────────────────────╯
```
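A minimal sketch of the failure mode (names simplified from the actual llama-stack code; the dict below is a stand-in for the routing table's provider registry): the registry database still maps registered models to the `ollama` provider, but after `OLLAMA_URL` is unset the provider is never instantiated, so the bare dict lookup raises a `KeyError` that surfaces as an opaque 500.

```python
# Stand-in for the routing table's provider registry. On the second
# run, OLLAMA_URL is unset, so 'ollama' was never instantiated.
impls_by_provider_id = {}

def get_provider_impl(provider_id: str):
    # Old behavior: no membership check before the lookup, so a stale
    # registry entry produces a bare KeyError ('ollama').
    return impls_by_provider_id[provider_id]
```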
Now you get:

```
           File "/Users/charliedoern/projects/Documents/llama-stack/llama_stack/core/routing_tables/models.py", line 69, in get_provider_impl
             raise ValueError(
         ValueError: Provider ID not found in currently running providers. Usually this indicates that your registry.db is out of date. Please ensure
         that the databases associated with your distro are not out of date.
INFO     2025-08-12 16:07:40,677 console_span_processor:62 telemetry:  20:07:40.628 [INFO] ::1:55414 - "POST /v1/openai/v1/chat/completions HTTP/1.1"
         400
```

and in the client:

```
Failed to inference chat-completion

Error Type: BadRequestError
Details: Error code: 400 - {'detail': 'Invalid value: Provider ID not found in currently running providers. Usually this indicates that your registry.db is out of date. Please ensure that the databases associated with your distro are not out of date.'}
```

This is more descriptive and gives the user a course of action.
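The change can be sketched roughly as follows (a simplified illustration, not the exact llama-stack implementation; the dict is a stand-in for the routing table's provider registry): an explicit membership check replaces the bare lookup, and the `ValueError` carries the actionable message shown above, which the server surfaces as HTTP 400 rather than 500.

```python
# Stand-in for the routing table's provider registry.
impls_by_provider_id = {}

def get_provider_impl(provider_id: str):
    # New behavior: check membership first and raise a ValueError with
    # an actionable message instead of letting a bare KeyError escape.
    if provider_id not in impls_by_provider_id:
        raise ValueError(
            f"Provider {provider_id} not found in currently running providers. "
            "Usually this indicates that your registry.db is out of date. "
            "Please ensure that the databases associated with your distro "
            "are not out of date."
        )
    return impls_by_provider_id[provider_id]
```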

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-08-13 11:36:33 -04:00