llama-stack-mirror/llama_stack/providers/remote/inference/watsonx
mergify[bot] 0899f78943
fix: Avoid model_limits KeyError (backport #4060) (#4283)
# What does this PR do?
It avoids a `model_limits` KeyError raised while trying to list embedding models for
watsonx.


Closes https://github.com/llamastack/llama-stack/issues/4059
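
As a rough illustration of the failure mode (not the provider's actual code; the spec layout and field names below are assumptions), some watsonx model specs omit the `model_limits` key, so indexing it directly raises `KeyError`, whereas a `.get()` lookup with a fallback does not:

```python
# Hypothetical watsonx model spec; real specs may omit "model_limits" entirely.
spec = {
    "model_id": "ibm/granite-embedding-107m-multilingual",
    "functions": [{"id": "embedding"}],
}

# Direct indexing raises KeyError for such models:
#   max_len = spec["model_limits"]["max_sequence_length"]

# Defensive access avoids the KeyError and falls back to a default:
max_len = spec.get("model_limits", {}).get("max_sequence_length") or 512
```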

## Test Plan

Start the server with the watsonx distro:
```bash
llama stack list-deps watsonx | xargs -L1 uv pip install
uv run llama stack run watsonx
```
Run 
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url=base_url)  # base_url points at the running watsonx distro
client.models.list()
```
Check whether any embedding models are available (currently there are none).
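
A quick way to run that check, assuming each entry returned by `client.models.list()` exposes a `model_type` field:

```python
# Keep only the embedding models registered by the watsonx provider.
embedding_models = [m for m in client.models.list() if m.model_type == "embedding"]
print(embedding_models)  # expected to be non-empty once the fix is in place
```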

---

This is an automatic backport of pull request #4060 done by
[Mergify](https://mergify.com).

Co-authored-by: Wojciech-Rebisz <147821486+Wojciech-Rebisz@users.noreply.github.com>
2025-12-03 10:56:24 +01:00
| File | Last commit | Date |
|---|---|---|
| __init__.py | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| config.py | fix: Fixed WatsonX remote inference provider (#3801) | 2025-10-14 14:52:32 +02:00 |
| watsonx.py | fix: Avoid model_limits KeyError (backport #4060) (#4283) | 2025-12-03 10:56:24 +01:00 |