
# Llama Stack Integration Tests

You can run the Llama Stack integration tests against either a Llama Stack library or a Llama Stack endpoint.

To test against a Llama Stack library with a specific configuration, run:

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml \
  pytest -s -v tests/client-sdk/inference/test_inference.py
```

or use just the template name:

```bash
LLAMA_STACK_CONFIG=together \
  pytest -s -v tests/client-sdk/inference/test_inference.py
```

To test against a Llama Stack endpoint, run:

```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 \
  pytest -s -v tests/client-sdk/inference/test_inference.py
```
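The two modes above differ only in which environment variable is set; the test suite's conftest inspects them to decide how to connect. The helper below is a minimal sketch of that dispatch (the function name and return shape are hypothetical, not the repo's actual conftest code):

```python
import os


def resolve_stack_target():
    """Decide how the tests should connect, based on the two env vars above.

    Returns a (mode, value) tuple: ("library", config) when
    LLAMA_STACK_CONFIG is set (a run.yaml path or template name), or
    ("endpoint", url) when LLAMA_STACK_BASE_URL is set.
    Hypothetical helper for illustration only.
    """
    config = os.environ.get("LLAMA_STACK_CONFIG")
    if config:
        return ("library", config)
    url = os.environ.get("LLAMA_STACK_BASE_URL")
    if url:
        return ("endpoint", url)
    raise RuntimeError(
        "Set LLAMA_STACK_CONFIG or LLAMA_STACK_BASE_URL to run the tests"
    )
```

Library mode takes precedence here when both variables are set; that ordering is an assumption of this sketch, not documented behavior.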

## Report Generation

To generate a report, run with the `--report` option:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/ --report report.md
```

## Common options

Depending on the API, custom options are enabled:

- For tests in `inference/` and `agents/`, we support the `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides.
- For tests in `vector_io/`, we support the `--embedding-model` override.
- For tests in `safety/`, we support the `--safety-shield` override.
- The report param can be `--report` or `--report <path>`. If a path is not provided, we make a best effort to infer it from the config / template name. For URL endpoints, a path is required.
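These overrides are ordinary pytest command-line options. A `conftest.py` can register them through pytest's standard `pytest_addoption` hook; the sketch below shows roughly what that registration looks like (help strings and the `--report` argument handling are assumptions, not the repo's exact code):

```python
# conftest.py (sketch) -- register the custom overrides with pytest.
def pytest_addoption(parser):
    parser.addoption("--inference-model",
                     help="model used in text inference tests")
    parser.addoption("--vision-inference-model",
                     help="model used in image inference tests")
    parser.addoption("--embedding-model",
                     help="embedding model used in vector_io tests")
    parser.addoption("--safety-shield",
                     help="shield used in safety tests")
    # --report takes an optional path: `const` supplies the value when the
    # flag is passed bare, matching the `--report` / `--report <path>` forms.
    parser.addoption("--report", nargs="?", const=True,
                     help="generate a report, optionally at <path>")
```

Tests then read the values with `request.config.getoption("--inference-model")`, falling back to a default when the flag is absent.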