# What does this PR do?
- llama_stack/exceptions.py: Add `UnsupportedModelError` class
- remote inference ollama.py and utils/inference/model_registry.py:
Replace `ValueError` with `UnsupportedModelError`
- utils/inference/litellm_openai_mixin.py: Remove the `register_model`
implementation from the `LiteLLMOpenAIMixin` class; it now uses the
parent class `ModelRegistryHelper`'s implementation
Closes #2517
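
For reviewers, here is a minimal sketch of the new exception (the exact
signature and message format are assumptions; see llama_stack/exceptions.py
in this PR for the actual definition):
```python
# Hypothetical sketch of the new exception; the real definition lives in
# llama_stack/exceptions.py, and the message format here is an assumption.
class UnsupportedModelError(ValueError):
    """Raised when a provider is asked to serve a model it does not support."""

    def __init__(self, model_name: str, supported_models_list: list[str]):
        message = (
            f"'{model_name}' model is not supported. "
            f"Supported models are: {', '.join(supported_models_list)}"
        )
        super().__init__(message)
```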
## Test Plan
1. Create a new `test_run_openai.yaml` and paste the following config in
it:
```yaml
version: '2'
image_name: test-image
apis:
- inference
providers:
  inference:
  - provider_id: openai
    provider_type: remote::openai
    config:
      max_tokens: 8192
models:
- metadata: {}
  model_id: "non-existent-model"
  provider_id: openai
  model_type: llm
server:
  port: 8321
```
And run the server with:
```bash
uv run llama stack run test_run_openai.yaml
```
You should now get a `llama_stack.exceptions.UnsupportedModelError` with
the list of supported models in the error message.
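
For context, the check that produces this error is conceptually something
like the following (a simplified, hypothetical sketch; the attribute and
method names are assumptions, and the real implementation in
utils/inference/model_registry.py is async and operates on `Model` objects):
```python
from llama_stack.exceptions import UnsupportedModelError

# Simplified sketch of the registry-side check; names are assumptions.
class ModelRegistryHelper:
    def __init__(self, supported_model_ids: list[str]):
        self.supported_model_ids = supported_model_ids

    def register_model(self, model_id: str) -> str:
        if model_id not in self.supported_model_ids:
            # Previously a bare ValueError; now a descriptive error that
            # lists the models the provider actually supports.
            raise UnsupportedModelError(model_id, self.supported_model_ids)
        return model_id
```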
---
Tested with the following remote inference providers; all of them raise
`UnsupportedModelError`:
- Anthropic
- Cerebras
- Fireworks
- Gemini
- Groq
- Ollama
- OpenAI
- SambaNova
- Together
- Watsonx
---------
Co-authored-by: Rohan Awhad <rawhad@redhat.com>