add pytest option to generate a functional report for distribution (#833)

# What does this PR do?

Add a pytest option (`--report`) to generate a functional report for a Llama Stack distribution.
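
For context, custom pytest flags like this are normally registered through the `pytest_addoption` hook in a `conftest.py`. Below is a minimal sketch of that wiring, not this PR's actual implementation; only the `--report` flag name comes from the PR, and the help text and default are illustrative.

```python
# conftest.py -- minimal sketch of registering a boolean --report flag.
# Only the flag name is taken from this PR; everything else is illustrative.
def pytest_addoption(parser):
    parser.addoption(
        "--report",
        action="store_true",
        default=False,
        help="generate a functional report for the llama stack distribution",
    )
```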

## Test Plan
```
export LLAMA_STACK_CONFIG=./llama_stack/templates/fireworks/run.yaml
/opt/miniconda3/envs/stack/bin/pytest -s -v tests/client-sdk/  --report
```

Verify that a report file is generated at
`./llama_stack/templates/fireworks/report.md`.
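
For illustration, one common way a plugin like this can emit a markdown summary is pytest's `pytest_terminal_summary` hook. This is a hedged sketch, not the PR's code; deriving the output path from `LLAMA_STACK_CONFIG` is an assumption based on the paths shown in the test plan above.

```python
# conftest.py (sketch, continued) -- one possible way to write report.md when
# --report is passed. Illustrative only, not this PR's implementation.
import os


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    if not config.getoption("--report"):
        return
    # Assumption: the report lands next to the run.yaml named by
    # LLAMA_STACK_CONFIG, matching the path shown in the test plan.
    run_config = os.environ.get("LLAMA_STACK_CONFIG", "run.yaml")
    out_path = os.path.join(os.path.dirname(run_config) or ".", "report.md")

    lines = ["# Functional Report", "", "| Test | Outcome |", "| --- | --- |"]
    # terminalreporter.stats maps an outcome string to a list of test reports
    for outcome in ("passed", "failed", "skipped"):
        for rep in terminalreporter.stats.get(outcome, []):
            lines.append(f"| {rep.nodeid} | {outcome} |")

    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```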


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.


Diff excerpt, showing the completion tests renamed to `test_text_completion_*`:

```diff
@@ -82,7 +82,7 @@ def base64_image_url():
     return base64_url


-def test_completion_non_streaming(llama_stack_client, text_model_id):
+def test_text_completion_non_streaming(llama_stack_client, text_model_id):
     response = llama_stack_client.inference.completion(
         content="Complete the sentence using one word: Roses are red, violets are ",
         stream=False,
@@ -94,7 +94,7 @@ def test_completion_non_streaming(llama_stack_client, text_model_id):
     assert "blue" in response.content.lower().strip()


-def test_completion_streaming(llama_stack_client, text_model_id):
+def test_text_completion_streaming(llama_stack_client, text_model_id):
     response = llama_stack_client.inference.completion(
         content="Complete the sentence using one word: Roses are red, violets are ",
         stream=True,
@@ -147,7 +147,7 @@ def test_completion_log_probs_streaming(llama_stack_client, text_model_id):
     assert not chunk.logprobs, "Logprobs should be empty"


-def test_completion_structured_output(
+def test_text_completion_structured_output(
     llama_stack_client, text_model_id, inference_provider_type
 ):
     user_input = """
```