llama-stack-mirror/src/llama_stack/providers/remote
Wojciech-Rebisz 07c28cd519
fix: Avoid model_limits KeyError (#4060)
# What does this PR do?
It avoids a `model_limits` KeyError when retrieving embedding models for
watsonx.

Closes https://github.com/llamastack/llama-stack/issues/4059
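As a minimal sketch of the idea behind the fix (not the actual patch; the watsonx spec layout and field names here are assumptions), the `model_limits` lookup is guarded rather than indexed directly:
```python
def embedding_dimension(spec: dict) -> int | None:
    """Return the embedding dimension from a watsonx model spec, or None
    when the spec carries no "model_limits" entry -- the case that
    previously raised KeyError."""
    return (spec.get("model_limits") or {}).get("embedding_dimension")
```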

## Test Plan
Start the server with the watsonx distro:
```bash
llama stack list-deps watsonx | xargs -L1 uv pip install
uv run llama stack run watsonx
```
Then run:
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url=base_url)  # e.g. "http://localhost:8321"
client.models.list()
```
Check whether any embedding models are available (currently, none are
listed).
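For example, the check can be scripted as follows (assuming the client's model objects expose `model_type` and `identifier` fields, as in the llama-stack model schema):
```python
embedding_models = [m for m in client.models.list() if m.model_type == "embedding"]
print(f"Found {len(embedding_models)} embedding model(s):")
for m in embedding_models:
    print(f"  {m.identifier}")
```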
2025-11-05 10:34:40 -08:00
| Name | Last commit | Last updated |
| --- | --- | --- |
| agents | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| datasetio | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| eval | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| files | feat: openai files provider (#3946) | 2025-10-28 16:25:03 -07:00 |
| inference | fix: Avoid model_limits KeyError (#4060) | 2025-11-05 10:34:40 -08:00 |
| post_training | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| safety | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| tool_runtime | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |
| vector_io | chore!: BREAKING CHANGE: vector_db_id -> vector_store_id (#3923) | 2025-10-27 14:26:06 -07:00 |
| __init__.py | chore(package): migrate to src/ layout (#3920) | 2025-10-27 12:02:21 -07:00 |