Ben Browning 9f2a7e6a74 fix: multiple tool calls in remote-vllm chat_completion
This fixes an issue in how we used the tool_call_buf from streaming
tool calls in the remote-vllm provider where it would end up
concatenating parameters from multiple different tool call results
instead of aggregating the results from each tool call separately.
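The fix amounts to keying the buffer by tool-call index rather than appending everything into one shared buffer. A minimal sketch of the idea (the `index`/`name`/`arguments` shape mirrors OpenAI-style streaming chunks; these names are illustrative, not the provider's actual internals):

```python
from collections import defaultdict

def aggregate_tool_calls(deltas):
    """Accumulate streamed argument fragments separately per tool-call index."""
    buffers = defaultdict(lambda: {"name": "", "arguments": ""})
    for delta in deltas:
        buf = buffers[delta["index"]]  # key by index, never one shared buffer
        if delta.get("name"):
            buf["name"] = delta["name"]
        buf["arguments"] += delta.get("arguments", "")
    return [buffers[i] for i in sorted(buffers)]

# Two interleaved tool calls streamed as four deltas:
deltas = [
    {"index": 0, "name": "get_weather", "arguments": '{"city": '},
    {"index": 1, "name": "get_time", "arguments": '{"tz": '},
    {"index": 0, "arguments": '"Paris"}'},
    {"index": 1, "arguments": '"UTC"}'},
]
calls = aggregate_tool_calls(deltas)
# calls[0]["arguments"] == '{"city": "Paris"}'
# calls[1]["arguments"] == '{"tz": "UTC"}'
```

With a single shared buffer, the same deltas would have produced the mangled string `{"city": {"tz": "Paris"}"UTC"}`.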

It also fixes an issue found while digging into that, where we were
accidentally mixing the JSON string form of tool call parameters with
the string representation of the Python form, which meant we'd end up
with single quotes in what should be double-quoted JSON strings.
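The distinction is easy to reproduce: calling `str()` on a parameters dict yields Python's repr with single quotes, which is not valid JSON, whereas `json.dumps()` produces a proper double-quoted JSON string.

```python
import json

args = {"city": "Paris"}

# Python repr: single quotes, not parseable as JSON
assert str(args) == "{'city': 'Paris'}"

# JSON serialization: double quotes, round-trips cleanly
assert json.dumps(args) == '{"city": "Paris"}'
assert json.loads(json.dumps(args)) == args
```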

The following tests now pass 100% for the remote-vllm provider,
where some of the test_text_inference tests were failing before this change:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_text_inference.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_vision_inference.py --vision-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

```

Many of the agent tests are passing, although some are failing due to
bugs in vLLM's pythonic tool parser for Llama models. See the PR at
https://github.com/vllm-project/vllm/pull/17917 and a gist at
https://gist.github.com/bbrowning/b5007709015cb2aabd85e0bd08e6d60f for
changes needed there, which will have to get made upstream in vLLM.

Agent tests:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/agents/test_agents.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-14 20:58:57 -04:00