When the result of a ToolCall is passed back into vLLM for the model
to handle (as is often the case in agentic tool-calling workflows), we
did not handle the case where the tool call's name is not a string
value but an instance of the BuiltinTool enum. This fixes that by
converting those enum members to their string values before
serializing them into an OpenAI chat completion request to vLLM.
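
A minimal sketch of the conversion, with a locally defined stand-in
enum so the snippet runs on its own (the real BuiltinTool enum ships
with the Llama model datatypes, and the helper name here is
illustrative, not the actual function in the provider):

    from enum import Enum

    # Stand-in for the BuiltinTool enum from the Llama model datatypes;
    # defined locally so this sketch is self-contained.
    class BuiltinTool(Enum):
        brave_search = "brave_search"
        code_interpreter = "code_interpreter"

    def tool_name_to_str(tool_name) -> str:
        # Unwrap BuiltinTool members to their string value before
        # serializing the tool call into an OpenAI chat completion
        # request; plain string tool names pass through unchanged.
        if isinstance(tool_name, BuiltinTool):
            return tool_name.value
        return tool_name

    assert tool_name_to_str(BuiltinTool.brave_search) == "brave_search"
    assert tool_name_to_str("get_weather") == "get_weather"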
PR #1931 fixed a bug where we weren't passing these tool call results
back into vLLM, but as a side effect it introduced this serialization
bug when using BuiltinTools.
Resolves #2070
Signed-off-by: Ben Browning <bbrownin@redhat.com>
Include the tool call details with the chat when doing RAG with remote
vLLM
Fixes: #1929
With this PR the tool call is included in the chat returned to vLLM,
and the model (meta-llama/Llama-3.1-8B-Instruct) then returns the
answer as expected.
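
A minimal sketch of the idea, using illustrative stand-in dataclasses
rather than the real llama-stack message types: the assistant message
that carried the tool call is serialized with its tool_calls field
populated instead of being dropped, so the model can match the
subsequent tool result to the call that produced it.

    import json
    from dataclasses import dataclass

    # Illustrative stand-ins for the real llama-stack message types.
    @dataclass
    class ToolCall:
        call_id: str
        tool_name: str
        arguments: dict

    @dataclass
    class CompletionMessage:
        content: str
        tool_calls: list

    def assistant_message_to_openai_dict(message: CompletionMessage) -> dict:
        result = {"role": "assistant", "content": message.content}
        # The fix: carry the tool call details along with the chat
        # instead of dropping them, so vLLM sees which call the
        # following tool result message answers.
        if message.tool_calls:
            result["tool_calls"] = [
                {
                    "id": tc.call_id,
                    "type": "function",
                    "function": {
                        "name": tc.tool_name,
                        "arguments": json.dumps(tc.arguments),
                    },
                }
                for tc in message.tool_calls
            ]
        return result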
Signed-off-by: Derek Higgins <derekh@redhat.com>