llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Ilya Kolchinsky 5052c3cbf3
fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114)
# What does this PR do?
Closes #2113.
Closes #1783.

Fixes a bug in handling the end of a tool execution request stream where
no `finish_reason` is provided by the model; this edge case surfaced as the
"out of token budget" error reported in #2113.
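
As a rough illustration (not the provider's actual code), here is a minimal sketch of the fixed behavior. `Chunk`, `ToolCall`, and `process_stream` are hypothetical stand-ins for the streaming types in `vllm.py`; the key idea is flushing the accumulated tool call when the stream ends without a `finish_reason`, instead of silently dropping it:

```python
# Hypothetical sketch of the edge case; names are illustrative, not the provider's API.
from dataclasses import dataclass


@dataclass
class Chunk:
    """One streamed chunk: may carry a tool-call argument delta and/or a finish reason."""
    tool_call_delta: str | None = None
    finish_reason: str | None = None


@dataclass
class ToolCall:
    arguments: str = ""


def process_stream(chunks):
    """Accumulate tool-call deltas and yield the completed call.

    Fixed behavior: if the stream ends without any chunk carrying a
    finish_reason, the accumulated tool call is still flushed rather
    than dropped, so the caller does not keep retrying and burn tokens.
    """
    call = ToolCall()
    saw_finish = False
    for chunk in chunks:
        if chunk.tool_call_delta is not None:
            call.arguments += chunk.tool_call_delta
        if chunk.finish_reason is not None:
            saw_finish = True
            yield call
    if not saw_finish and call.arguments:
        # Edge case from this PR: the stream ended with no finish_reason.
        yield call


if __name__ == "__main__":
    stream = [Chunk(tool_call_delta='{"city": '), Chunk(tool_call_delta='"Paris"}')]
    for completed in process_stream(stream):
        print(completed.arguments)  # {"city": "Paris"}
```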

## Test Plan
1. Ran existing unit tests
2. Added a dedicated test verifying correct behavior in this edge case (a sketch of such a test follows this list)
3. Ran the code snapshot from #2113
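
For illustration only, a minimal test in the spirit of item 2, reusing the hypothetical `Chunk` and `process_stream` from the sketch above (not the repository's actual test):

```python
def test_stream_without_finish_reason_yields_tool_call():
    # The model streams tool-call arguments but never sends a finish_reason.
    stream = [
        Chunk(tool_call_delta='{"city": '),
        Chunk(tool_call_delta='"Paris"}'),
    ]
    results = list(process_stream(stream))
    # The fixed behavior still yields exactly one completed tool call.
    assert len(results) == 1
    assert results[0].arguments == '{"city": "Paris"}'
```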

2025-05-14 13:11:02 -07:00
__init__.py Fix precommit check after moving to ruff (#927) 2025-02-02 06:46:45 -08:00
config.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
vllm.py fix: Fixed an "out of token budget" error when attempting a tool call via remote vLLM provider (#2114) 2025-05-14 13:11:02 -07:00