llama-stack-mirror/llama_stack/apis

Latest commit: `7cb5d3c60f` by Rohan Awhad
chore: standardize unsupported model error #2517 (#2518)
# What does this PR do?

- llama_stack/exceptions.py: add an `UnsupportedModelError` class (a rough sketch follows below)
- remote inference ollama.py and utils/inference/model_registry.py: raise
`UnsupportedModelError` instead of a plain `ValueError`
- utils/inference/litellm_openai_mixin.py: remove the `register_model`
override from the `LiteLLMOpenAIMixin` class; it now uses the parent
`ModelRegistryHelper` implementation

Closes #2517
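
For context, here is a minimal sketch of the new exception and of how a model-registry helper can raise it instead of a bare `ValueError`. The constructor signature and the `ModelRegistryHelperSketch` class are illustrative assumptions, not the exact code merged by this PR:

```python
# Minimal sketch. The real exception lives in llama_stack/exceptions.py; its
# constructor signature and the toy registry helper below are assumptions
# made for illustration only.
class UnsupportedModelError(ValueError):
    """Raised when a provider is asked to serve a model it does not support."""

    def __init__(self, model_name: str, supported_models: list[str]) -> None:
        message = (
            f"'{model_name}' is not a supported model. "
            f"Supported models: {', '.join(supported_models)}"
        )
        super().__init__(message)


class ModelRegistryHelperSketch:
    """Toy stand-in for the helper in utils/inference/model_registry.py."""

    def __init__(self, supported_models: list[str]) -> None:
        self.supported_models = supported_models

    def register_model(self, model_id: str) -> str:
        if model_id not in self.supported_models:
            # Previously a plain ValueError; now the standardized exception.
            raise UnsupportedModelError(model_id, self.supported_models)
        return model_id
```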


## Test Plan


1. Create a new `test_run_openai.yaml` and paste the following config in
it:

```yaml
version: '2'
image_name: test-image
apis:
- inference
providers:
  inference:
  - provider_id: openai
    provider_type: remote::openai
    config:
      max_tokens: 8192
models:
- metadata: {}
  model_id: "non-existent-model"
  provider_id: openai
  model_type: llm
server:
  port: 8321
```

2. Run the server with:
```bash
uv run llama stack run test_run_openai.yaml
```

You should now get a `llama_stack.exceptions.UnsupportedModelError` whose
message lists the models the provider does support.
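
A hedged sketch of how calling code could rely on the standardized exception (the import path follows the exception name above; how `register_model` is invoked here is a stand-in, not the provider's real call site):

```python
# Sketch only: catching the standardized error instead of a generic ValueError.
from llama_stack.exceptions import UnsupportedModelError


def try_register(registry, model_id: str) -> bool:
    """Attempt to register `model_id`; report the supported models on failure."""
    try:
        registry.register_model(model_id)
        return True
    except UnsupportedModelError as err:
        # The error message is expected to list the models the provider supports.
        print(f"Cannot register {model_id}: {err}")
        return False
```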

---

Tested against the following remote inference providers; each one raises
`UnsupportedModelError`:
- Anthropic
- Cerebras
- Fireworks
- Gemini
- Groq
- Ollama
- OpenAI
- SambaNova
- Together
- Watsonx

---------

Co-authored-by: Rohan Awhad <rawhad@redhat.com>
2025-06-27 14:26:58 -04:00
Directory contents:

| Name | Last commit | Last updated |
| --- | --- | --- |
| `agents` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `batch_inference` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `benchmarks` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `common` | chore: standardize unsupported model error #2517 (#2518) | 2025-06-27 14:26:58 -04:00 |
| `datasetio` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `datasets` | fix: finish conversion to StrEnum (#2514) | 2025-06-26 08:01:26 +05:30 |
| `eval` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `files` | fix: finish conversion to StrEnum (#2514) | 2025-06-26 08:01:26 +05:30 |
| `inference` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `inspect` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `models` | fix: finish conversion to StrEnum (#2514) | 2025-06-26 08:01:26 +05:30 |
| `post_training` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `providers` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `safety` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `scoring` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `scoring_functions` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `shields` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `synthetic_data_generation` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `telemetry` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `tools` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `vector_dbs` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `vector_io` | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `resource.py` | feat: drop python 3.10 support (#2469) | 2025-06-19 12:07:14 +05:30 |
| `version.py` | llama-stack version alpha -> v1 | 2025-01-15 05:58:09 -08:00 |