llama-stack-mirror/llama_stack/providers/tests
Ben Browning dd1a366347
fix: logprobs support in remote-vllm provider (#1074)
# What does this PR do?

The remote-vllm provider was not passing logprobs options from
CompletionRequest or ChatCompletionRequest through to the OpenAI client
parameters. I verified this manually and also observed the provider
failing `TestInference::test_completion_logprobs`. This was filed as
issue #1073.

This fixes that by passing the `logprobs.top_k` value through to the
parameters we pass into the OpenAI client.
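
For illustration, the change amounts to something like the sketch below; `logprobs.top_k` follows the description above, but the function and dictionary names here are hypothetical, not the provider's actual code:

```
# Illustrative sketch only; build_openai_params and `params` are hypothetical names.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LogProbConfig:
    top_k: Optional[int] = None


def build_openai_params(request) -> dict:
    params: dict = {}
    # ... prompt, sampling, and streaming options elided ...
    if getattr(request, "logprobs", None) and request.logprobs.top_k:
        # Forward the requested number of top log-probabilities per token
        # to the OpenAI-compatible client instead of dropping it.
        params["logprobs"] = request.logprobs.top_k
    return params
```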

Additionally, this fixes a bug in `test_text_inference.py` where it
mistakenly assumed `chunk.delta` was of type `ContentDelta` for
completion requests. The deltas are of type `ContentDelta` for chat
completion requests, but for basic completion requests the deltas are
plain strings. Because of this latter issue, the test was likely also
failing for other providers that do properly support logprobs; it
surfaced while fixing the remote-vllm issue above.
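
As a minimal illustration of the distinction the test now accounts for (this helper is not the actual test code, and the `ContentDelta` handling is simplified):

```
# Illustrative only: chunk.delta is a plain string for completion streaming,
# and a ContentDelta (e.g. a TextDelta with a .text field) for chat completion.
from typing import Iterable


def collect_streamed_text(chunks: Iterable, is_chat: bool) -> str:
    pieces = []
    for chunk in chunks:
        if is_chat:
            # Chat completion streaming: delta is a ContentDelta object.
            pieces.append(chunk.delta.text)
        else:
            # Basic completion streaming: delta is already a string.
            pieces.append(chunk.delta)
    return "".join(pieces)
```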

(Closes #1073)

## Test Plan

First, you need a vLLM server running. I ran one locally like this:
```
vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8001 --enable-auto-tool-choice --tool-call-parser llama3_json
```

Next, run `test_text_inference.py` against this vLLM server using the remote-vllm provider like this:
```
VLLM_URL="http://localhost:8001/v1" python -m pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py --providers "inference=vllm_remote"
```

Before my change, the test failed with this error:
```
llama_stack/providers/tests/inference/test_text_inference.py:155: in test_completion_logprobs
    assert 1 <= len(response.logprobs) <= 5
E   TypeError: object of type 'NoneType' has no len()
```

After my change, the test passes.

[//]: # (## Documentation)

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-02-13 11:00:00 -05:00
| Name | Last commit | Date |
|------|-------------|------|
| `agents` | docs, tests: replace datasets.rst with memory_optimizations.rst (#968) | 2025-02-05 11:25:56 -05:00 |
| `datasetio` | test: rm unused exception alias in pytest.raises (#991) | 2025-02-07 08:04:25 -08:00 |
| `eval` | test: replace memory with vector_io fixture (#984) | 2025-02-06 10:12:59 -08:00 |
| `inference` | fix: logprobs support in remote-vllm provider (#1074) | 2025-02-13 11:00:00 -05:00 |
| `post_training` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `safety` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `scoring` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `tools` | docs, tests: replace datasets.rst with memory_optimizations.rst (#968) | 2025-02-05 11:25:56 -05:00 |
| `vector_io` | feat: Adding sqlite-vec as a vectordb (#1040) | 2025-02-12 10:50:03 -08:00 |
| `__init__.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `ci_test_config.yaml` | [test automation] support run tests on config file (#730) | 2025-01-16 12:05:49 -08:00 |
| `conftest.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `env.py` | Significantly simpler and malleable test setup (#360) | 2024-11-04 17:36:43 -08:00 |
| `README.md` | [test automation] support run tests on config file (#730) | 2025-01-16 12:05:49 -08:00 |
| `report.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `resolver.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |

Testing Llama Stack Providers

The Llama Stack is designed as a collection of Lego blocks -- various APIs -- which are composable and can be used to quickly and reliably build an app. We need a testing setup which is relatively flexible to enable easy combinations of these providers.

We use pytest and all of its dynamism to enable the features needed. Specifically (see the conftest.py sketch after this list):

  • We use pytest_addoption to add CLI options allowing you to override providers, models, etc.

  • We use pytest_generate_tests to dynamically parametrize our tests. This allows us to support a default set of (providers, models, etc.) combinations but retain the flexibility to override them via the CLI if needed.

  • We use pytest_configure to make sure we dynamically add appropriate marks based on the fixtures we make.

  • We use pytest_collection_modifyitems to filter tests based on the test config (if specified).
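
Putting these together, a conftest.py for this style of setup looks roughly like the sketch below. The option names, marks, fixture names, and defaults here are illustrative assumptions, not the repo's actual implementation:

```
# conftest.py -- illustrative sketch only.
DEFAULT_PROVIDERS = {"inference": "meta_reference", "safety": "meta_reference"}


def pytest_addoption(parser):
    # CLI overrides, e.g. --providers inference=vllm_remote,safety=meta_reference
    parser.addoption("--providers", default="", help="api1=fixture1,api2=fixture2")
    parser.addoption("--inference-model", default=None, help="override the inference model")
    parser.addoption("--config", default=None, help="YAML test config to filter tests")


def pytest_configure(config):
    # Register marks dynamically so e.g. `-m "fireworks and llama_8b"` runs without warnings.
    for mark in ("meta_reference", "together", "fireworks", "ollama", "llama_8b", "llama_3b"):
        config.addinivalue_line("markers", f"{mark}: provider/model combination")


def pytest_generate_tests(metafunc):
    # Parametrize tests over (api, provider-fixture) combinations. The real hooks
    # merge the defaults with any --providers overrides passed on the CLI.
    if "provider_combo" in metafunc.fixturenames:
        combos = dict(DEFAULT_PROVIDERS)  # CLI override merging elided in this sketch
        metafunc.parametrize(
            "provider_combo", sorted(combos.items()), ids=lambda combo: "=".join(combo)
        )


def pytest_collection_modifyitems(config, items):
    # If a YAML test config was given via --config, keep only the tests it selects.
    if config.getoption("--config") is None:
        return
    # (YAML loading and test-name matching elided in this sketch)
```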

Common options

All tests support a --providers option which can be a string of the form api1=provider_fixture1,api2=provider_fixture2. So, when testing safety (which needs both the inference and safety APIs) you can use --providers inference=together,safety=meta_reference to use these fixtures in concert.
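
For illustration, a provider spec string like that can be parsed into a per-API fixture mapping along these lines (the helper name is hypothetical, not the repo's actual code):

```
from __future__ import annotations


def parse_provider_spec(spec: str) -> dict[str, str]:
    """Parse 'inference=together,safety=meta_reference' into a per-API mapping."""
    providers: dict[str, str] = {}
    for pair in spec.split(","):
        if not pair.strip():
            continue
        api, _, fixture = pair.partition("=")
        providers[api.strip()] = fixture.strip()
    return providers


# parse_provider_spec("inference=together,safety=meta_reference")
# -> {"inference": "together", "safety": "meta_reference"}
```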

Depending on the API, additional custom options are available. For example, inference tests allow for an --inference-model override, etc.

By default, we disable warnings and enable short tracebacks. You can override them using pytest's flags as appropriate.

Some providers need special API keys or other configuration options to work. You can check the individual fixtures (located in tests/<api>/fixtures.py) to see which keys they use. These can be specified using the --env CLI option. You can also export them in your shell environment or put them in the .env file in the directory from which you run the tests. For example, to use the Together fixture you can pass --env TOGETHER_API_KEY=<...>.
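
Conceptually, resolving one of these settings looks roughly like the sketch below; the precedence shown (--env values first, then the shell environment, then .env) is an assumption for illustration, and the helper name is hypothetical:

```
import os
from pathlib import Path
from typing import Dict, Optional


def get_test_setting(key: str, cli_env: Dict[str, str]) -> Optional[str]:
    """Illustrative lookup: --env values, then the shell environment, then .env."""
    if key in cli_env:                     # passed via --env KEY=value
        return cli_env[key]
    if key in os.environ:                  # exported in your shell
        return os.environ[key]
    dotenv = Path(".env")                  # .env in the directory you run pytest from
    if dotenv.exists():
        for line in dotenv.read_text().splitlines():
            name, _, value = line.partition("=")
            if name.strip() == key:
                return value.strip()
    return None
```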

Inference

We have the following orthogonal parametrizations (pytest "marks") for inference tests:

  • providers: (meta_reference, together, fireworks, ollama)
  • models: (llama_8b, llama_3b)

If you want to run a test with the llama_8b model on Fireworks, you can use:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks and llama_8b" \
  --env FIREWORKS_API_KEY=<...>
```

You can compose more complex expressions, for example to run both llama_8b and llama_3b on Fireworks, but only llama_3b on Ollama:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks or (ollama and llama_3b)" \
  --env FIREWORKS_API_KEY=<...>
```

Finally, you can override the model completely by doing:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m fireworks \
  --inference-model "meta-llama/Llama3.1-70B-Instruct" \
  --env FIREWORKS_API_KEY=<...>
```

Agents

The Agents API composes three other APIs underneath:

  • Inference
  • Safety
  • Memory

Given that each of these has several fixtures, the set of combinations is large. We provide a default set of combinations (see tests/agents/conftest.py) with easy-to-use "marks":

  • meta_reference -- uses all the meta_reference fixtures for the dependent APIs
  • together -- uses Together for inference, and meta_reference for the rest
  • ollama -- uses Ollama for inference, and meta_reference for the rest

An example test with Together:

```
pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
  --env TOGETHER_API_KEY=<...>
```

If you want to override the inference model or safety model used, you can use the --inference-model or --safety-shield CLI options as appropriate.

If you want to test a remotely hosted stack, you can use -m remote as follows:

```
pytest -s -m remote llama_stack/providers/tests/agents/test_agents.py \
  --env REMOTE_STACK_URL=<...>
```

Test Config

If you want to run a test suite with a custom set of tests and parametrizations, you can define a YAML test config under the llama_stack/providers/tests/ folder and pass the filename through the --config option as follows:

```
pytest llama_stack/providers/tests/ --config=ci_test_config.yaml
```

Test config format

Currently, test configs are supported for the inference, agents, and memory API tests.

An example of the test config format can be found in ci_test_config.yaml.