llama-stack-mirror/tests/unit/providers
Ben Browning fd9d52564b fix: resolve BuiltinTools to strings for vllm tool_call messages
When the result of a ToolCall gets passed back into vLLM for the model
to act on (as is often the case in agentic tool-calling workflows), we
forgot to handle the case where the tool name is not a string value but
an instance of the BuiltinTool enum. This fixes that, properly
converting those enum members to string values before serializing them
into an OpenAI chat completion request to vLLM.
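
The fix amounts to resolving the enum member to its string value before
building the OpenAI-compatible request. A minimal sketch of the idea in
Python (the BuiltinTool members shown and the helper name are
illustrative, not the exact llama-stack code):

    from enum import Enum

    class BuiltinTool(Enum):
        # Illustrative subset; the real enum lives in llama-stack's
        # datatypes module.
        brave_search = "brave_search"
        code_interpreter = "code_interpreter"

    def tool_name_as_str(tool_name) -> str:
        # Enum members are not JSON-serializable, so substitute their
        # string value when constructing the chat completion request.
        if isinstance(tool_name, BuiltinTool):
            return tool_name.value
        return tool_name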

PR #1931 fixed a bug where we weren't passing these tool-call results
back into vLLM, but as a side effect it introduced this serialization
bug when using BuiltinTools.

Resolves #2070

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-30 20:10:33 -04:00
agents feat: make sure agent sessions are under access control (#1737) 2025-03-21 07:31:16 -07:00
inference fix: Including tool call in chat (#1931) 2025-04-24 16:59:10 -07:00
nvidia fix: Fix messages format in NVIDIA safety check request body (#2063) 2025-04-30 18:01:28 +02:00
utils fix: resolve BuiltinTools to strings for vllm tool_call messages 2025-04-30 20:10:33 -04:00
vector_io chore: Updating sqlite-vec to make non-blocking calls (#1762) 2025-03-23 17:25:44 -07:00
test_configs.py feat(api): don't return a payload on file delete (#1640) 2025-03-25 17:12:36 -07:00