Update test_agents.py for Llama 4 models and remote-vllm

This updates test_agents.py a bit after testing with Llama 4 Scout and
the remote-vllm provider. The main difference is somewhat more verbose
prompting to encourage tool calls, because Llama 4 Scout prefers to
reply that polyjuice is fictional and has no boiling point instead of
calling our custom tool unless it's prodded a bit.

Also, the remote-vllm distribution doesn't use input/output shields by
default, so test_multi_tool_calls was adjusted to only expect shield
results when shields are in use, and otherwise not to check for shield
usage at all.
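The conditional check follows this general pattern (the step dicts and `"shield_call"` step type below are simplified stand-ins, not the real llama-stack response objects):

```python
# Hypothetical sketch of a conditional shield assertion; step shapes
# are illustrative stand-ins for the actual turn response steps.

def collect_shield_steps(steps, shields_enabled):
    """Return shield steps, asserting they exist only when shields are on."""
    shield_steps = [s for s in steps if s["step_type"] == "shield_call"]
    if shields_enabled:
        # With shields configured, the turn should include shield calls.
        assert shield_steps, "expected shield_call steps when shields are on"
    # Without shields, make no assertion about shield usage at all.
    return shield_steps

# No shields: absence of shield steps is fine.
print(collect_shield_steps([{"step_type": "inference"}], False))

# Shields on: shield steps must be present.
print(collect_shield_steps(
    [{"step_type": "shield_call"}, {"step_type": "inference"}], True
))
```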

Note that passing these tests requires changes to the vLLM pythonic
tool parser; those are listed at
https://gist.github.com/bbrowning/4734240ce96b4264340caa9584e47c9e

With this change, all of the agent tests pass with Llama 4 Scout and
remote-vllm except one of the RAG tests, which looks to be an
unrelated (and pre-existing) failure.

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/agents/test_agents.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Commit b3493ee94f (parent 9f2a7e6a74) by Ben Browning, 2025-05-14 10:41:30 -04:00
2 changed files with 29 additions and 17 deletions

```diff
@@ -532,7 +532,7 @@ async def test_process_vllm_chat_completion_stream_response_tool_call_args_last_
             yield chunk
     chunks = [chunk async for chunk in _process_vllm_chat_completion_stream_response(mock_stream())]
-    assert len(chunks) == 2
+    assert len(chunks) == 3
     assert chunks[-1].event.event_type == ChatCompletionResponseEventType.complete
     assert chunks[-2].event.delta.type == "tool_call"
     assert chunks[-2].event.delta.tool_call.tool_name == mock_tool_name
@@ -585,7 +585,7 @@ async def test_process_vllm_chat_completion_stream_response_no_finish_reason():
             yield chunk
     chunks = [chunk async for chunk in _process_vllm_chat_completion_stream_response(mock_stream())]
-    assert len(chunks) == 2
+    assert len(chunks) == 3
     assert chunks[-1].event.event_type == ChatCompletionResponseEventType.complete
     assert chunks[-2].event.delta.type == "tool_call"
     assert chunks[-2].event.delta.tool_call.tool_name == mock_tool_name
@@ -634,7 +634,7 @@ async def test_process_vllm_chat_completion_stream_response_tool_without_args():
            yield chunk
     chunks = [chunk async for chunk in _process_vllm_chat_completion_stream_response(mock_stream())]
-    assert len(chunks) == 2
+    assert len(chunks) == 3
     assert chunks[-1].event.event_type == ChatCompletionResponseEventType.complete
     assert chunks[-2].event.delta.type == "tool_call"
     assert chunks[-2].event.delta.tool_call.tool_name == mock_tool_name
```
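The updated assertions expect a three-chunk stream shape: a start event, a tool-call delta, then a complete event. A simplified sketch of that shape, using stand-in dataclasses rather than the real llama-stack event types:

```python
# Hypothetical sketch of the three-chunk stream the updated assertions
# expect. Event/field names mirror the asserts in the diff, but these
# classes are simplified stand-ins, not the real llama-stack types.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Delta:
    type: str
    tool_name: Optional[str] = None


@dataclass
class Event:
    event_type: str
    delta: Optional[Delta] = None


chunks = [
    Event("start"),
    Event("progress", Delta("tool_call", tool_name="get_boiling_point")),
    Event("complete"),
]

assert len(chunks) == 3
assert chunks[-1].event_type == "complete"
assert chunks[-2].delta.type == "tool_call"
print([c.event_type for c in chunks])
```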