llama-stack/tests/client-sdk
Ashwin Bharambe 23b65b6cee
fix(test): update client-sdk tests to handle tool format parametrization better (#1287)
# What does this PR do?

Tool format depends on the model. @ehhuang introduced a
`get_default_tool_prompt_format` function for this purpose. We should
use that instead of the hacky model ID matching we had before.

Secondly, non-Llama models don't have this concept, so testing with those
models should work as-is.
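
For illustration only, here is a minimal, self-contained sketch of how a conftest fixture could centralize that choice. The `inference_model` fixture name and the stand-in helper below are assumptions for the sketch, not the PR's actual code:

```python
import pytest


def default_tool_prompt_format(model_id: str) -> str | None:
    """Stand-in for the get_default_tool_prompt_format helper mentioned above."""
    if not model_id.startswith("meta-llama/"):
        # Non-Llama models have no tool prompt format concept: pass no override
        # and let the provider use its own default.
        return None
    # Illustrative value only; the real helper derives the format from model metadata.
    return "json"


@pytest.fixture
def tool_prompt_format(inference_model: str) -> str | None:
    # Tests can pass this straight through; None means "no override".
    return default_tool_prompt_format(inference_model)
```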

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan

```bash
for distro in fireworks ollama; do
  LLAMA_STACK_CONFIG=$distro \
    pytest -s -v tests/client-sdk/inference/test_text_inference.py \
       --inference-model=meta-llama/Llama-3.2-3B-Instruct \
       --vision-inference-model=""
done

LLAMA_STACK_CONFIG=dev \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
       --inference-model=openai/gpt-4o \
       --vision-inference-model=""

```

[//]: # (## Documentation)
2025-02-26 21:16:00 -08:00
| Name | Last commit | Date |
|---|---|---|
| `agents` | feat: allow specifying specific tool within toolgroup (#1239) | 2025-02-26 14:07:05 -08:00 |
| `inference` | fix(test): update client-sdk tests to handle tool format parametrization better (#1287) | 2025-02-26 21:16:00 -08:00 |
| `safety` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `tool_runtime` | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| `vector_io` | Fix test infra, sentence embeddings mixin | 2025-02-21 15:11:46 -08:00 |
| `__init__.py` | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| `conftest.py` | feat: add (openai, anthropic, gemini) providers via litellm (#1267) | 2025-02-25 22:07:33 -08:00 |
| `metadata.py` | Report generation minor fixes (#884) | 2025-01-28 04:58:12 -08:00 |
| `README.md` | Update README.md | 2025-02-14 15:45:08 -08:00 |
| `report.py` | script for running client sdk tests (#895) | 2025-02-19 22:38:06 -08:00 |

# Llama Stack Integration Tests

You can run the Llama Stack integration tests against either a Llama Stack library or a Llama Stack endpoint.

To test against a Llama Stack library with a specific configuration, run:

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml pytest -s -v tests/client-sdk/inference/
```

or with just the template name:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/inference/
```

To test against a Llama Stack endpoint, run:

```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/inference
```
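
If you do not already have a server running, one way to start one locally is with `llama stack run` (assuming the distribution is already built; the distribution name and flags below are an example and may vary by version):

```bash
# start a local Llama Stack server on port 8089, then run the pytest command above
llama stack run together --port 8089
```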

## Report Generation

To generate a report, run with the `--report` option:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/ --report report.md
```

## Common options

Depending on the API, the following custom options are available:

- For tests in `inference/` and `agents/`, we support the `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides.
- For tests in `vector_io/`, we support the `--embedding-model` override.
- For tests in `safety/`, we support the `--safety-shield` override.
- The report option can be either `--report` or `--report <path>`. If no path is provided, we make a best-effort attempt to infer one from the config / template name. For URL endpoints, a path is required.
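
For example, a hypothetical invocation combining several of these overrides (the model and shield IDs below are placeholders, not required values):

```bash
LLAMA_STACK_CONFIG=together \
  pytest -s -v tests/client-sdk/ \
    --inference-model=meta-llama/Llama-3.2-3B-Instruct \
    --vision-inference-model="" \
    --embedding-model=all-MiniLM-L6-v2 \
    --safety-shield=meta-llama/Llama-Guard-3-8B \
    --report report.md
```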