llama-stack/llama_stack/providers/tests
Hardik Shah a51c8b4efc
Convert SamplingParams.strategy to a union (#767)
# What does this PR do?

Cleans up how we provide sampling params. Earlier, strategy was an enum
and all params (top_p, temperature, top_k) across all strategies were
grouped. We now have a strategy union object with each strategy (greedy,
top_p, top_k) having its corresponding params.
Earlier, 
```
class SamplingParams: 
    strategy: enum ()
    top_p, temperature, top_k and other params
```
However, the `strategy` field was not actually used by any provider, so the
exact sampling behavior was hard to infer from the params alone: you could
pass temperature, top_p, and top_k together, and it was unclear how a
provider would interpret them.

Hence we introduced a union, where each strategy and its relevant params are
grouped together, avoiding this confusion.
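
Roughly, the new shape is a tagged union of per-strategy models. The sketch below is simplified and illustrative (Pydantic discriminated union); exact field names and defaults may differ slightly from the implementation:

```
from typing import Annotated, Literal, Optional, Union

from pydantic import BaseModel, Field


class GreedySamplingStrategy(BaseModel):
    type: Literal["greedy"] = "greedy"


class TopPSamplingStrategy(BaseModel):
    type: Literal["top_p"] = "top_p"
    temperature: Optional[float] = None
    top_p: Optional[float] = 0.95


class TopKSamplingStrategy(BaseModel):
    type: Literal["top_k"] = "top_k"
    top_k: int


# The "type" field discriminates which strategy (and which params) is in use.
SamplingStrategy = Annotated[
    Union[GreedySamplingStrategy, TopPSamplingStrategy, TopKSamplingStrategy],
    Field(discriminator="type"),
]


class SamplingParams(BaseModel):
    strategy: SamplingStrategy = Field(default_factory=GreedySamplingStrategy)
    max_tokens: Optional[int] = 0
    repetition_penalty: Optional[float] = 1.0
```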

All providers, tests, notebooks, the README, and other places that used
sampling params have been updated to the new format.
   

## Test Plan
`pytest llama_stack/providers/tests/inference/groq/test_groq_utils.py`
// inference on ollama, fireworks and together 
`with-proxy pytest -v -s -k "ollama"
--inference-model="meta-llama/Llama-3.1-8B-Instruct"
llama_stack/providers/tests/inference/test_text_inference.py `
// agents on fireworks 
`pytest -v -s -k 'fireworks and create_agent'
--inference-model="meta-llama/Llama-3.1-8B-Instruct"
llama_stack/providers/tests/agents/test_agents.py
--safety-shield="meta-llama/Llama-Guard-3-8B"`

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [X] Ran pre-commit to handle lint / formatting issues.
- [X] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [X] Updated relevant documentation.
- [X] Wrote necessary unit or integration tests.

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-15 05:38:51 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| agents | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| datasetio | [rag evals] refactor & add ability to eval retrieval + generation in agentic eval pipeline (#664) | 2025-01-02 11:21:33 -08:00 |
| eval | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| inference | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| memory | agents to use tools api (#673) | 2025-01-08 19:01:00 -08:00 |
| post_training | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| safety | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| scoring | [rag evals] refactor & add ability to eval retrieval + generation in agentic eval pipeline (#664) | 2025-01-02 11:21:33 -08:00 |
| tools | agents to use tools api (#673) | 2025-01-08 19:01:00 -08:00 |
| __init__.py | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| conftest.py | agents to use tools api (#673) | 2025-01-08 19:01:00 -08:00 |
| env.py | Significantly simpler and malleable test setup (#360) | 2024-11-04 17:36:43 -08:00 |
| README.md | update tests --inference-model to hf id | 2024-11-18 17:36:58 -08:00 |
| resolver.py | Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735) | 2025-01-09 11:51:36 -08:00 |

Testing Llama Stack Providers

The Llama Stack is designed as a collection of Lego blocks -- various APIs -- which are composable and can be used to quickly and reliably build an app. We need a testing setup which is relatively flexible to enable easy combinations of these providers.

We use pytest and all of its dynamism to enable the features needed. Specifically:

  • We use pytest_addoption to add CLI options allowing you to override providers, models, etc.

  • We use pytest_generate_tests to dynamically parametrize our tests. This allows us to support a default set of (providers, models, etc.) combinations but retain the flexibility to override them via the CLI if needed.

  • We use pytest_configure to dynamically register the appropriate marks based on the fixtures we define (see the sketch below).
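
Put together, a much-simplified conftest.py using these hooks might look like the following. This is illustrative only; the real option names, fixtures, and default combinations live in conftest.py and the per-API fixtures.py files:

```
# Illustrative sketch of the pytest hooks described above; not the real conftest.py.

DEFAULT_INFERENCE_MODELS = ["meta-llama/Llama-3.1-8B-Instruct"]


def pytest_addoption(parser):
    # CLI overrides for providers and models.
    parser.addoption("--providers", default="", help="api1=fixture1,api2=fixture2")
    parser.addoption("--inference-model", default=None, help="Override the inference model")


def pytest_generate_tests(metafunc):
    # Parametrize any test that requests an `inference_model` argument,
    # honoring the CLI override when one is given.
    if "inference_model" in metafunc.fixturenames:
        override = metafunc.config.getoption("--inference-model")
        models = [override] if override else DEFAULT_INFERENCE_MODELS
        metafunc.parametrize("inference_model", models)


def pytest_configure(config):
    # Register marks so selections like -m "fireworks and llama_8b" work.
    for mark in ("meta_reference", "together", "fireworks", "ollama", "llama_8b", "llama_3b"):
        config.addinivalue_line("markers", f"{mark}: tests using the {mark} fixture")
```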

Common options

All tests support a --providers option which can be a string of the form api1=provider_fixture1,api2=provider_fixture2. So, when testing safety (which needs both the inference and safety APIs) you can use --providers inference=together,safety=meta_reference to use these fixtures in concert.
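
For example, to exercise the safety tests with Together inference and the meta-reference safety provider (assuming the safety tests live at llama_stack/providers/tests/safety/test_safety.py):

pytest -s -v llama_stack/providers/tests/safety/test_safety.py \
  --providers inference=together,safety=meta_reference \
  --env TOGETHER_API_KEY=<...>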

Depending on the API, additional custom options are available. For example, inference tests support an --inference-model override.

By default, we disable warnings and enable short tracebacks. You can override them using pytest's flags as appropriate.
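
For example, to restore full tracebacks and the default warning behavior, pass the standard pytest flags:

pytest -s -v --tb=long -W default -m ollama \
  llama_stack/providers/tests/inference/test_text_inference.py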

Some providers need special API keys or other configuration options to work. Check the individual fixtures (located in tests/<api>/fixtures.py) to see which keys are needed. These can be specified using the --env CLI option, exported in your shell environment, or placed in a .env file in the directory from which you run the test. For example, to use the Together fixture you can pass --env TOGETHER_API_KEY=<...>
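
Equivalently, you can put the key in a .env file next to where you run the tests and omit the --env flag:

echo "TOGETHER_API_KEY=<...>" > .env
pytest -s -v -m together llama_stack/providers/tests/inference/test_text_inference.py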

Inference

We have the following orthogonal parametrizations (pytest "marks") for inference tests:

  • providers: (meta_reference, together, fireworks, ollama)
  • models: (llama_8b, llama_3b)

If you want to run a test with the llama_8b model on fireworks, you can use:

pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks and llama_8b" \
  --env FIREWORKS_API_KEY=<...>

You can make the selection more complex, for example running both llama_8b and llama_3b on Fireworks, but only llama_3b on Ollama:

pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks or (ollama and llama_3b)" \
  --env FIREWORKS_API_KEY=<...>

Finally, you can override the model completely by doing:

pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m fireworks \
  --inference-model "meta-llama/Llama-3.1-70B-Instruct" \
  --env FIREWORKS_API_KEY=<...>

Agents

The Agents API composes three other APIs underneath:

  • Inference
  • Safety
  • Memory

Given that each of these APIs has several fixtures, the set of combinations is large. We provide a default set of combinations (see tests/agents/conftest.py) with easy-to-use "marks":

  • meta_reference -- uses all the meta_reference fixtures for the dependent APIs
  • together -- uses Together for inference, and meta_reference for the rest
  • ollama -- uses Ollama for inference, and meta_reference for the rest

An example test with Together:

pytest -s -m together llama_stack/providers/tests/agents/test_agents.py  \
 --env TOGETHER_API_KEY=<...>

If you want to override the inference model or safety model used, you can use the --inference-model or --safety-shield CLI options as appropriate.
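
For example, to run the Together agents tests with a specific model and shield:

pytest -s -v -m together llama_stack/providers/tests/agents/test_agents.py \
  --inference-model "meta-llama/Llama-3.1-8B-Instruct" \
  --safety-shield "meta-llama/Llama-Guard-3-8B" \
  --env TOGETHER_API_KEY=<...>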

If you want to test a remotely hosted stack, you can use -m remote as follows:

pytest -s -m remote llama_stack/providers/tests/agents/test_agents.py \
  --env REMOTE_STACK_URL=<...>