llama-stack-mirror/llama_stack/providers/remote
Ilya Kolchinsky 5052c3cbf3
fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114)
# What does this PR do?
Closes #2113.
Closes #1783.

Fixes a bug in handling the end of a tool execution request stream where
the model provides no `finish_reason`.
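
For context, here is a minimal sketch of the kind of mapping involved, assuming an OpenAI-compatible streamed chunk whose `finish_reason` may be `None`. The `StopReason` values mirror llama-stack's enum, but `stop_reason_from_chunk` and its signature are illustrative, not the actual provider code.

```python
# Hypothetical sketch: translating a possibly-missing finish_reason from a
# streamed vLLM chunk into a stop reason. Not the real llama-stack code.
from enum import Enum


class StopReason(Enum):
    end_of_turn = "end_of_turn"
    end_of_message = "end_of_message"
    out_of_tokens = "out_of_tokens"


def stop_reason_from_chunk(finish_reason: str | None, has_tool_calls: bool) -> StopReason:
    """Map an OpenAI-style finish_reason onto a StopReason.

    A stream may end without any finish_reason at all; if that case falls
    through to out_of_tokens, the caller surfaces a spurious "out of token
    budget" error. Treat the missing value as a normal end of generation.
    """
    if finish_reason == "stop":
        return StopReason.end_of_turn
    if finish_reason == "tool_calls":
        return StopReason.end_of_message
    if finish_reason == "length":
        return StopReason.out_of_tokens
    # No finish_reason provided: if the model emitted tool calls, the turn
    # ended with a tool-call message; otherwise assume a normal end of turn.
    return StopReason.end_of_message if has_tool_calls else StopReason.end_of_turn
```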

## Test Plan
1. Ran existing unit tests
2. Added a dedicated test verifying correct behavior in this edge case (sketched below)
3. Ran the code snapshot from #2113
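
A dedicated test for this edge case might look like the following minimal sketch, written against the illustrative `stop_reason_from_chunk` helper above rather than the real provider code.

```python
# Hypothetical test: a stream that ends without a finish_reason must not be
# misreported as running out of tokens.
def test_missing_finish_reason_maps_to_sane_stop_reason():
    # Tool calls were emitted, so the turn ended with a tool-call message.
    assert stop_reason_from_chunk(None, has_tool_calls=True) is StopReason.end_of_message
    # No tool calls: treat as a normal end of turn, not out_of_tokens.
    assert stop_reason_from_chunk(None, has_tool_calls=False) is StopReason.end_of_turn
```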

2025-05-14 13:11:02 -07:00
| Directory | Last commit | Date |
|-----------|-------------|------|
| agents | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio | chore(refact): move paginate_records fn outside of datasetio (#2137) | 2025-05-12 10:56:14 -07:00 |
| eval | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| inference | fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114) | 2025-05-14 13:11:02 -07:00 |
| post_training | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| safety | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| tool_runtime | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| vector_io | fix: chromadb type hint (#2136) | 2025-05-12 06:27:01 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |