When the result of a ToolCall gets passed back into vLLM for the model to handle (as is often the case in agentic tool-calling workflows), we forgot to handle the case where BuiltinTool calls are not string values but instances of the BuiltinTool enum. This fixes that by converting those enums to their string values before serializing them into an OpenAI chat completion request to vLLM.

PR #1931 fixed a bug where we weren't passing these tool-calling results back into vLLM, but as a side effect it introduced this serialization bug when using BuiltinTools.

Resolves #2070

Signed-off-by: Ben Browning <bbrownin@redhat.com>
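For context, here is a minimal sketch of the kind of normalization this fix describes. The `BuiltinTool` members and the `tool_name_as_str` helper below are illustrative stand-ins, not the actual names used in the PR:

```python
from enum import Enum
from typing import Union


class BuiltinTool(Enum):
    """Illustrative stand-in for llama-stack's BuiltinTool enum."""
    brave_search = "brave_search"
    code_interpreter = "code_interpreter"


def tool_name_as_str(tool_name: Union[BuiltinTool, str]) -> str:
    """Normalize a tool name to a plain string before building the
    OpenAI-style chat completion payload sent to vLLM.

    A ToolCall's tool name may arrive as a BuiltinTool enum member
    rather than a string; serializing the enum directly fails, so
    unwrap its .value first.
    """
    if isinstance(tool_name, BuiltinTool):
        return tool_name.value
    return tool_name


# Both forms normalize to the same JSON-safe string.
assert tool_name_as_str(BuiltinTool.brave_search) == "brave_search"
assert tool_name_as_str("get_weather") == "get_weather"
```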
Directory listing:

- agents
- inference
- nvidia
- utils
- vector_io
- test_configs.py