Ben Browning b3493ee94f Update test_agents.py for Llama 4 models and remote-vllm
This updates test_agents.py after testing with Llama 4 Scout and
the remote-vllm provider. The main change is more verbose prompting
to encourage tool calls: without some prodding, Llama 4 Scout tends
to reply that polyjuice is fictional and has no boiling point instead
of calling our custom tool.
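
As a rough illustration (not the exact diff), the prompting tweak is
along these lines; the tool name get_boiling_point and the exact
wording are assumptions here:

```
# Hypothetical sketch of the prompting change in test_agents.py; the
# actual wording in the test may differ.

# Before: Llama 4 Scout often answers that polyjuice is fictional.
prompt = "What is the boiling point of polyjuice?"

# After: explicitly point the model at the custom tool so it emits a
# tool call instead of answering from its own knowledge.
prompt = (
    "What is the boiling point of polyjuice? "
    "Use the get_boiling_point tool to find out."
)
```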

Also, the remote-vllm distribution doesn't use input/output shields by
default, so test_multi_tool_calls was adjusted to expect the shield
results only when shields are in use and to skip the shield checks
otherwise.
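
A minimal sketch of that adjustment, with a hypothetical helper name
and step-type string (the real test code differs):

```
# Illustrative sketch of the test_multi_tool_calls change; the helper
# name and the "shield_call" step-type string are assumptions.
def assert_shield_usage(step_types: list[str], shields_in_use: bool) -> None:
    if shields_in_use:
        # Distributions configured with input/output shields should
        # produce shield call steps around the tool calls.
        assert "shield_call" in step_types
    # Otherwise skip the check entirely: remote-vllm runs without
    # shields by default, so their absence is not a failure.
```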

Note that passing these tests requires changes to the vLLM pythonic
tool parser; those are listed at
https://gist.github.com/bbrowning/4734240ce96b4264340caa9584e47c9e

With this change, all of the agent tests pass with Llama 4 Scout and
remote-vllm except one of the RAG tests, which looks to be an
unrelated (and pre-existing) failure.

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/agents/test_agents.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-14 20:58:57 -04:00
test_remote_vllm.py Update test_agents.py for Llama 4 models and remote-vllm 2025-05-14 20:58:57 -04:00