llama-stack/tests/client-sdk
Sixian Yi ba453c3487
Report generation minor fixes (#884)
# What does this PR do?

Fixed report generation:
1) Do not initialize a new client in report.py; instead, get it from the pytest fixture (see the sketch below).
2) Add a "provider" entry for the "safety" and "agents" sections.
3) Add logprobs functionality to the "inference" section.
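
For illustration, here is a minimal sketch of what a session-scoped client fixture in conftest.py might look like; the fixture name and exact wiring are assumptions, not the repository's actual code:

```python
# Hypothetical sketch only -- the real conftest.py fixture may differ.
import os

import pytest
from llama_stack_client import LlamaStackClient


@pytest.fixture(scope="session")
def llama_stack_client():
    """Build one client per test session so report generation can reuse it."""
    base_url = os.environ.get("LLAMA_STACK_BASE_URL")
    if base_url is None:
        pytest.skip("LLAMA_STACK_BASE_URL is not set")
    return LlamaStackClient(base_url=base_url)
```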


## Test Plan

See the regenerated report.



## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-28 04:58:12 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| `agents` | make default tool prompt format none in agent config (#863) | 2025-01-23 14:44:59 -08:00 |
| `inference` | Fix meta-reference GPU implementation for inference | 2025-01-22 18:31:59 -08:00 |
| `safety` | nitpick | 2025-01-22 18:09:46 -08:00 |
| `tool_runtime` | Update OpenAPI generator to output discriminator (#848) | 2025-01-22 22:15:23 -08:00 |
| `vector_io` | add distro report (#847) | 2025-01-22 19:20:49 -08:00 |
| `__init__.py` | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| `conftest.py` | Fix report generation for url endpoints (#876) | 2025-01-24 13:15:44 -08:00 |
| `metadata.py` | Report generation minor fixes (#884) | 2025-01-28 04:58:12 -08:00 |
| `README.md` | Fix report generation for url endpoints (#876) | 2025-01-24 13:15:44 -08:00 |
| `report.py` | Report generation minor fixes (#884) | 2025-01-28 04:58:12 -08:00 |

# Llama Stack Integration Tests

You can run Llama Stack integration tests against either a Llama Stack library or a Llama Stack endpoint.

To test against a Llama Stack library with a specific configuration, run:

```bash
LLAMA_STACK_CONFIG=./llama_stack/templates/cerebras/run.yaml pytest -s -v tests/client-sdk/inference/test_inference.py
```

or with just a template name:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/inference/test_inference.py
```

To test against a Llama Stack endpoint, run:

```bash
LLAMA_STACK_BASE_URL=http://localhost:8089 pytest -s -v tests/client-sdk/inference/test_inference.py
```

## Report Generation

To generate a report, run with the `--report` option:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/ --report report.md
```
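
As a rough illustration of how such a flag can be wired up, conftest.py could register `--report` with an optional path via pytest's `pytest_addoption` hook; this is a hedged sketch under assumptions, not the repository's exact implementation:

```python
# Hypothetical sketch -- the real conftest.py may register the option differently.
def pytest_addoption(parser):
    parser.addoption(
        "--report",
        action="store",
        nargs="?",      # the path argument is optional
        const=True,     # bare --report stores True, triggering path inference
        default=False,  # report generation is off unless requested
        help="Generate a test report, optionally at the given path.",
    )
```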

## Common options

Depending on the API, custom options are enabled:

- For tests in `inference/` and `agents/`, we support `--inference-model` (used in text inference tests) and `--vision-inference-model` (used only in image inference tests) overrides.
- For tests in `vector_io/`, we support an `--embedding-model` override.
- For tests in `safety/`, we support a `--safety-shield` override.
- The report param can be `--report` or `--report <path>`. If no path is provided, we make a best-effort attempt to infer one from the config / template name. For URL endpoints, a path is required.
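
For example, a run that combines these overrides might look like the following; the model ID and report path are placeholders:

```bash
LLAMA_STACK_CONFIG=together pytest -s -v tests/client-sdk/inference/ \
  --inference-model meta-llama/Llama-3.1-8B-Instruct \
  --report inference_report.md
```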