llama-stack-mirror/llama_stack/providers/remote
mergify[bot] 0899f78943
fix: Avoid model_limits KeyError (backport #4060) (#4283)
# What does this PR do?
It avoids a `model_limits` KeyError when listing embedding models for the
Watsonx provider (a sketch of the defensive pattern follows below).


Closes https://github.com/llamastack/llama-stack/issues/4059
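
The underlying issue is a brittle lookup: not every entry in the watsonx.ai model catalog includes a `model_limits` field, so indexing it directly raises `KeyError`. A minimal sketch of the defensive pattern, with hypothetical field names and fallback value (not the exact code from this PR):

```python
# Hypothetical shape of a watsonx.ai model spec; some entries omit "model_limits".
spec = {
    "model_id": "ibm/granite-embedding-107m-multilingual",
    "functions": [{"id": "embedding"}],
}

# Fragile: raises KeyError when "model_limits" is absent.
# max_len = spec["model_limits"]["max_sequence_length"]

# Defensive: fall back gracefully when limits are missing.
max_len = spec.get("model_limits", {}).get("max_sequence_length")
if max_len is None:
    max_len = 512  # illustrative default, not necessarily the value the PR uses
```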

## Test Plan

Start the server with the watsonx distro:
```bash
llama stack list-deps watsonx | xargs -L1 uv pip install
uv run llama stack run watsonx
```
Run:
```python
from llama_stack_client import LlamaStackClient

base_url = "http://localhost:8321"  # default address for a local `llama stack run` server

client = LlamaStackClient(base_url=base_url)
client.models.list()
```
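To filter the listing down to embedding models, a minimal sketch (assuming the client's `Model` objects expose a `model_type` field, as recent Llama Stack client versions do; adjust to your version):

```python
# Keep only the embedding models from the listing.
models = client.models.list()
embedding_models = [m for m in models if getattr(m, "model_type", None) == "embedding"]
print(f"Found {len(embedding_models)} embedding model(s)")
for m in embedding_models:
    print(m.identifier)
```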
Check whether any embedding model is available, as in the sketch above
(currently there is not a single one).

---

This is an automatic backport of pull request #4060
done by [Mergify](https://mergify.com).

Co-authored-by: Wojciech-Rebisz <147821486+Wojciech-Rebisz@users.noreply.github.com>
2025-12-03 10:56:24 +01:00
| Path | Latest commit | Date |
| --- | --- | --- |
| agents | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio | chore: remove build.py (#3869) | 2025-10-20 16:28:15 -07:00 |
| eval | feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) | 2025-09-25 17:17:00 -04:00 |
| files/s3 | feat(stores)!: use backend storage references instead of configs (#3697) | 2025-10-20 13:20:09 -07:00 |
| inference | fix: Avoid model_limits KeyError (backport #4060) (#4283) | 2025-12-03 10:56:24 +01:00 |
| post_training | chore: remove build.py (#3869) | 2025-10-20 16:28:15 -07:00 |
| safety | chore: remove build.py (#3869) | 2025-10-20 16:28:15 -07:00 |
| tool_runtime | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| vector_io | fix: Vector store persistence across server restarts (backport #3977) (#4225) | 2025-11-24 11:30:21 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |