llama-stack-mirror/llama_stack/apis/common
Rohan Awhad 7cb5d3c60f
chore: standardize unsupported model error #2517 (#2518)
# What does this PR do?

- `llama_stack/exceptions.py`: add an `UnsupportedModelError` class (a minimal sketch is included after this list)
- remote inference `ollama.py` and `utils/inference/model_registry.py`: replace `ValueError` with `UnsupportedModelError`
- `utils/inference/litellm_openai_mixin.py`: remove the `register_model` implementation from the `LiteLLMOpenAIMixin` class; it now uses the parent class `ModelRegistryHelper`'s implementation
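
For reference, a minimal sketch of what the new exception might look like (the constructor signature and message wording here are assumptions, not the actual implementation):

```python
# Hypothetical sketch of the class added in llama_stack/exceptions.py.
# Constructor signature and message format are assumptions.
class UnsupportedModelError(ValueError):
    """Raised when a requested model is not supported by the provider."""

    def __init__(self, model_name: str, supported_models: list[str]) -> None:
        super().__init__(
            f"Model '{model_name}' is not supported. "
            f"Supported models: {', '.join(supported_models)}"
        )
```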

Closes #2517


## Test Plan


1. Create a new `test_run_openai.yaml` file and paste the following config into it:

```yaml
version: '2'
image_name: test-image
apis:
- inference
providers:
  inference:
  - provider_id: openai
    provider_type: remote::openai
    config:
      max_tokens: 8192
models:
- metadata: {}
  model_id: "non-existent-model"
  provider_id: openai
  model_type: llm
server:
  port: 8321
```

2. Run the server with:
```bash
uv run llama stack run test_run_openai.yaml
```

You should now get a `llama_stack.exceptions.UnsupportedModelError` whose message includes the list of supported models.
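
If you want the same check in automated form, a hedged pytest-style sketch could look like the following (the provider fixture and the argument accepted by `register_model` are assumptions; the real method is the one inherited from `ModelRegistryHelper`):

```python
# Hypothetical pytest sketch, not part of the PR's actual test plan.
# Assumes a provider fixture exposing the inherited register_model method
# and that it accepts a plain model identifier for the unknown model.
import pytest

from llama_stack.exceptions import UnsupportedModelError


def test_unknown_model_is_rejected(openai_provider):
    # An unknown model should now raise UnsupportedModelError rather than
    # a plain ValueError.
    with pytest.raises(UnsupportedModelError):
        openai_provider.register_model("non-existent-model")
```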

---

Tested with the following remote inference providers; all of them raise
`UnsupportedModelError` (a sketch of the shared registration path they go through is shown after this list):
- Anthropic
- Cerebras
- Fireworks
- Gemini
- Groq
- Ollama
- OpenAI
- SambaNova
- Together
- Watsonx
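
These providers behave consistently because model registration now funnels through the shared helper in `utils/inference/model_registry.py`; here is an illustrative sketch of that pattern (class internals and method signature are assumptions beyond what the PR states):

```python
# Illustrative sketch only: the real ModelRegistryHelper internals may
# differ. What the PR establishes is that an unknown model raises
# UnsupportedModelError instead of a plain ValueError.
from llama_stack.exceptions import UnsupportedModelError


class ModelRegistryHelper:
    def __init__(self, supported_model_ids: list[str]) -> None:
        self.supported_model_ids = supported_model_ids

    def register_model(self, model_id: str) -> str:
        # Previously a plain ValueError; standardized by this PR.
        if model_id not in self.supported_model_ids:
            raise UnsupportedModelError(model_id, self.supported_model_ids)
        return model_id
```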

---------

Co-authored-by: Rohan Awhad <rawhad@redhat.com>
2025-06-27 14:26:58 -04:00
| File | Last commit | Date |
|---|---|---|
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `content_types.py` | chore: more mypy fixes (#2029) | 2025-05-06 09:52:31 -07:00 |
| `errors.py` | chore: standardize unsupported model error #2517 (#2518) | 2025-06-27 14:26:58 -04:00 |
| `job_types.py` | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00 |
| `responses.py` | feat: Add url field to PaginatedResponse and populate it using route … (#2419) | 2025-06-16 11:19:48 +02:00 |
| `training_types.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `type_system.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |