Commit graph

3 commits

Author SHA1 Message Date
Mustafa Elbehery
1790fc0f25
feat: Remove initialize() Method from LlamaStackAsLibrary (#2979)
# What does this PR do?
This PR removes the need to call `initialize()` explicitly on `LlamaStackAsLibrary`.

Previously, the user had to invoke `client.initialize()` before using the client.
To improve the developer experience and avoid runtime errors, this PR
initializes `LlamaStackAsLibrary` implicitly when the client is constructed.
It also prevents multiple initializations of the same client, while maintaining
backward compatibility.

This PR does the following (see the sketch below):

- Automatic initialization: the constructor calls `initialize_impl()` automatically.
- The client is fully initialized once `__init__` completes.
- Repeated initialization is prevented after the client has been successfully initialized.
- The `initialize()` method still exists but is now a no-op.
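
Below is a minimal usage sketch of the behavior described above; the import path and constructor argument are assumptions for illustration, not taken from this PR.

```
# Import path and distro name are assumed for illustration only.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

# Before this PR: the client had to be initialized explicitly.
#   client = LlamaStackAsLibraryClient("starter")
#   client.initialize()

# After this PR: the constructor initializes the client, so it is usable immediately.
client = LlamaStackAsLibraryClient("starter")

# initialize() is kept for backward compatibility but is now a no-op,
# so calling it again does not re-initialize the client.
client.initialize()
```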

fixes https://github.com/meta-llama/llama-stack/issues/2946

---------

Signed-off-by: Mustafa Elbehery <melbeher@redhat.com>
2025-08-21 15:59:04 -07:00
ashwinb
8ed69978f9
refactor(tests): make the responses tests nicer (#3161)
# What does this PR do?

A _bunch_ of cleanup for the Responses tests.

- Got rid of the YAML test cases and moved them to simple pydantic models
- Split the large monolithic test file into multiple focused test files:
   - `test_basic_responses.py` for basic and image response tests
   - `test_tool_responses.py` for tool-related tests
   - `test_file_search.py` for file search specific tests
- Added a `StreamingValidator` helper class to standardize streaming response validation (a hypothetical sketch follows this list)
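
The PR summary doesn't show the helper's interface; the following is a hypothetical sketch of what a streaming-validation helper along these lines could look like, with all attribute and method names assumed rather than taken from the actual `StreamingValidator`.

```
# Hypothetical sketch only; the real StreamingValidator API may differ.
class StreamingValidator:
    def __init__(self, chunks):
        # Each chunk is assumed to expose a `type` field naming the streaming event.
        self.chunks = list(chunks)
        self.event_types = [chunk.type for chunk in self.chunks]

    def assert_basic_event_sequence(self):
        # A streamed response should start with a "created" event and end
        # with a "completed" event.
        assert self.event_types[0] == "response.created"
        assert self.event_types[-1] == "response.completed"

    def assert_has_event(self, event_type):
        assert event_type in self.event_types, f"missing streaming event: {event_type}"
```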

## Test Plan

Run the tests:

```
pytest -s -v tests/integration/non_ci/responses/ \
   --stack-config=starter \
   --text-model openai/gpt-4o \
   --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
    -k "client_with_models"
```
2025-08-15 00:05:36 +00:00
Ashwin Bharambe
5f1ddd35e4
chore(tests): refactor and move responses tests away from verifications (#3068)
This PR kills the verifications infrastructure which is no longer used.
It was relocated to the `llama-stack-evals`
(https://github.com/meta-llama/llama-stack-evals) repository previously.

The Responses tests used this infrastructure, but that wasn't strictly
necessary; it was just a little useful back when @bbrownin introduced the
tests. On Discord, we agreed that the tests can be moved to our regular
integration test infrastructure.

## Test Plan

Some tests currently fail (although they run!). I will send a follow-up PR
that makes them all pass.
2025-08-07 13:48:16 -07:00
Renamed from tests/verifications/openai_api/fixtures/fixtures.py