llama-stack-mirror/llama_stack
Ben Browning fd9d52564b fix: resolve BuiltinTools to strings for vllm tool_call messages
When the result of a ToolCall gets passed back into vLLM for the model
to handle (as is often the case in agentic tool-calling workflows), we
forgot to handle the case where the tool name is not a string value but
an instance of the BuiltinTool enum. This fixes that, properly
converting those enums to their string values before serializing them
into an OpenAI chat completion request to vLLM.
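A minimal sketch of the conversion, assuming a hypothetical
_resolve_tool_name helper (the actual helper name and call site in the
vLLM provider may differ):

    from enum import Enum

    class BuiltinTool(Enum):
        # Illustrative subset; the real enum lives in llama_stack's
        # model datatypes.
        brave_search = "brave_search"
        code_interpreter = "code_interpreter"

    def _resolve_tool_name(tool_name) -> str:
        # Hypothetical helper: BuiltinTool members are not
        # JSON-serializable, so resolve them to their string values
        # before building the OpenAI-compatible request body.
        if isinstance(tool_name, BuiltinTool):
            return tool_name.value
        return tool_name

    # The "name" field of an OpenAI-style tool call message:
    assert _resolve_tool_name(BuiltinTool.code_interpreter) == "code_interpreter"
    assert _resolve_tool_name("get_weather") == "get_weather"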

PR #1931 fixed a bug where we weren't passing these tool-calling
results back into vLLM, but as a side effect it introduced this
serialization bug when using BuiltinTools.

Resolves #2070

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-30 20:10:33 -04:00
apis feat: OpenAI Responses API (#1989) 2025-04-28 14:06:00 -07:00
cli feat: add additional logging to llama stack build (#1689) 2025-04-30 11:06:24 -07:00
distribution feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) 2025-04-30 11:35:49 -07:00
models feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) 2025-04-30 11:35:49 -07:00
providers fix: resolve BuiltinTools to strings for vllm tool_call messages 2025-04-30 20:10:33 -04:00
strong_typing feat: OpenAI Responses API (#1989) 2025-04-28 14:06:00 -07:00
templates chore: Remove zero-width space characters from OTEL service name env var defaults (#2060) 2025-04-30 17:56:46 +02:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: Remove style tags from log formatter (#1808) 2025-03-27 10:18:21 -04:00
schema_utils.py fix: dont check protocol compliance for experimental methods 2025-04-12 16:26:32 -07:00