llama-stack-mirror/llama_stack
Ilya Kolchinsky 5052c3cbf3
fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114)
# What does this PR do?
Closes #2113.
Closes #1783.

Fixes a bug in handling the end of a tool execution request stream when
the model provides no `finish_reason`.
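
For context, a minimal sketch of the defensive handling described above, assuming OpenAI-style streaming chunks. `handle_stream_end` and its arguments are hypothetical names for illustration, not the provider's actual code:

```python
# Minimal sketch, not the provider's actual implementation. Assumes
# OpenAI-style streaming chunks; all names here are hypothetical.

def handle_stream_end(chunk, tool_call_buffer):
    """Derive a finish reason for the final chunk of a streamed response."""
    finish_reason = chunk.choices[0].finish_reason if chunk.choices else None
    # Some models end a tool-call stream without setting finish_reason.
    # If a tool call has been accumulated, treat end-of-stream as
    # "tool_calls" instead of failing on the missing value.
    if finish_reason is None and tool_call_buffer:
        finish_reason = "tool_calls"
    return finish_reason
```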

## Test Plan
1. Ran existing unit tests
2. Added a dedicated test verifying correct behavior in this edge case (sketched below)
3. Ran the code snapshot from #2113
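
A hypothetical pytest-style check for the edge case, exercising the sketch above; the stub classes stand in for OpenAI-style streaming chunk objects and do not match the repository's actual test suite:

```python
# Hypothetical test; stubs mimic the shape of a streaming chunk.
def test_missing_finish_reason_with_buffered_tool_call():
    class Choice:
        finish_reason = None

    class Chunk:
        choices = [Choice()]

    # End of stream with an accumulated tool call but no finish_reason
    # should resolve to "tool_calls" rather than erroring out.
    assert handle_stream_end(Chunk(), tool_call_buffer=["get_weather"]) == "tool_calls"
```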

2025-05-14 13:11:02 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `apis` | feat: function tools in OpenAI Responses (#2094) | 2025-05-13 11:29:15 -07:00 |
| `cli` | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| `distribution` | feat: function tools in OpenAI Responses (#2094) | 2025-05-13 11:29:15 -07:00 |
| `models` | fix: llama4 tool use prompt fix (#2103) | 2025-05-06 22:18:31 -07:00 |
| `providers` | fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114) | 2025-05-14 13:11:02 -07:00 |
| `strong_typing` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `templates` | chore: remove pytest reports (#2156) | 2025-05-13 22:40:15 -07:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `schema_utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |