# Llama Stack Integration Tests
We use `pytest` for parameterizing and running tests. You can see all options with:
```bash
cd tests/integration

# this will show a long list of options, look for "Custom options:"
pytest --help
```
Here are the most important options:
- `--stack-config`: specify the stack config to use. You have three ways to point to a stack (each form is sketched right after this list):
  - a URL which points to a Llama Stack distribution server
  - a template (e.g., `fireworks`, `together`) or a path to a `run.yaml` file
  - a comma-separated list of api=provider pairs, e.g. `inference=fireworks,safety=llama-guard,agents=meta-reference`. This is most useful for testing a single API surface.
- `--env`: set environment variables, e.g. `--env KEY=value`. This is a utility option to set environment variables required by various providers.
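A minimal sketch of the three `--stack-config` forms, with `--env` supplying a provider key (the server URL and API key values are placeholders):

```bash
# 1. a URL pointing to a running distribution server (placeholder address)
pytest -s -v tests/integration/inference --stack-config=http://localhost:8321

# 2. a template name (or a path to a run.yaml file)
pytest -s -v tests/integration/inference --stack-config=together

# 3. api=provider pairs, with --env setting a required provider variable
pytest -s -v tests/integration/inference \
  --stack-config=inference=fireworks \
  --env FIREWORKS_API_KEY=<fireworks_api_key>
```

Remember to also pass model options; as noted below, tests are skipped when no model is specified.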
Model parameters can be influenced by the following options:
- `--text-model`: comma-separated list of text models.
- `--vision-model`: comma-separated list of vision models.
- `--embedding-model`: comma-separated list of embedding models.
- `--safety-shield`: comma-separated list of safety shields.
- `--judge-model`: comma-separated list of judge models.
- `--embedding-dimension`: output dimensionality of the embedding model to use for testing. Default: 384
Each of these is a comma-separated list and can be used to generate multiple parameter combinations; see the sketch below for how multiple values fan out. Note that tests will be skipped if no model is specified.
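For example, a single run with two text models parameterizes every text inference test over both:

```bash
# each text inference test runs once per listed model
pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config=together \
  --text-model=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
```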
Experimental options, still under development:

- `--record-responses`: record new API responses instead of using cached ones
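The flag is experimental, so treat this as a sketch that assumes `--record-responses` can simply be appended to a normal invocation:

```bash
# re-record API responses instead of replaying cached ones (experimental)
pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config=together \
  --text-model=meta-llama/Llama-3.1-8B-Instruct \
  --record-responses
```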
## Examples
Run all text inference tests with the `together` distribution and `meta-llama/Llama-3.1-8B-Instruct`:

```bash
pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=together \
   --text-model=meta-llama/Llama-3.1-8B-Instruct
```
Running all inference tests for a number of models:
```bash
TEXT_MODELS=meta-llama/Llama-3.1-8B-Instruct,meta-llama/Llama-3.1-70B-Instruct
VISION_MODELS=meta-llama/Llama-3.2-11B-Vision-Instruct
EMBEDDING_MODELS=all-MiniLM-L6-v2
export TOGETHER_API_KEY=<together_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=together \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS
```
Same thing, but instead of using the distribution, use an ad-hoc stack with just one provider (`fireworks` for inference):
```bash
export FIREWORKS_API_KEY=<fireworks_api_key>

pytest -s -v tests/integration/inference/ \
   --stack-config=inference=fireworks \
   --text-model=$TEXT_MODELS \
   --vision-model=$VISION_MODELS \
   --embedding-model=$EMBEDDING_MODELS
```
Running Vector IO tests for a number of embedding models:
```bash
EMBEDDING_MODELS=all-MiniLM-L6-v2

pytest -s -v tests/integration/vector_io/ \
   --stack-config=inference=sentence-transformers,vector_io=sqlite-vec \
   --embedding-model=$EMBEDDING_MODELS
```