The recent refactor (3778a4c3) introduced automatic initialization for
`LlamaStackAsLibraryClient`, but the unit tests still expected manual initialization and the `_is_initialized` flag.
This caused test failures.
Changes:
- Update test assertions to check route_impls is not None instead of _is_initialized
- Add proper mocking in tests to avoid external provider dependencies
- Maintain test coverage for automatic initialization behavior
- Ensure backward compatibility testing for deprecated initialize() method
Signed-off-by: Mustafa Elbehery <melbeher@redhat.com>
Currently, `client.initialize()` has to be invoked explicitly by the user.
To improve the developer experience and avoid runtime errors, this PR initializes `LlamaStackAsLibraryClient` implicitly when the client is first used.
It also prevents multiple initializations of the same client, while maintaining backward compatibility.
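A minimal, self-contained sketch of the lazy-initialization pattern described here; the real change lives in `LlamaStackAsLibraryClient`, and the class and attribute names below (other than `route_impls`) are stand-ins, not the actual code:
```
# Standalone sketch of implicit, idempotent initialization; not the actual
# LlamaStackAsLibraryClient implementation. Only `route_impls` mirrors a real attribute name.
class LazyLibraryClient:
    def __init__(self, config_name: str):
        self.config_name = config_name
        self.route_impls = None  # resolved lazily on first use

    def _ensure_initialized(self) -> None:
        # Idempotent: repeated calls (or an explicit, now-redundant initialize())
        # do not re-run provider resolution.
        if self.route_impls is None:
            self.route_impls = {"/v1/models": lambda: ["example-model"]}

    def initialize(self) -> None:
        # Kept for backward compatibility with callers that still invoke it manually.
        self._ensure_initialized()

    def request(self, route: str):
        self._ensure_initialized()
        return self.route_impls[route]()


client = LazyLibraryClient("starter")
print(client.request("/v1/models"))  # works without calling initialize() first
```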
Signed-off-by: Mustafa Elbehery <melbeher@redhat.com>
# What does this PR do?
Handles MCP tool calls in a previous response.
Closes #3105
## Test Plan
Made a call to create a response containing a tool call, then made a second call
linked to the first through `previous_response_id`. Did not get an error.
Also added a unit test.
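For reference, a hedged sketch of that manual check, using the OpenAI client against a locally running stack; the base URL, model, and MCP server values are placeholders, not part of this PR:
```
# Sketch of the manual verification; endpoint, model, and MCP server values are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")
mcp_tool = {
    "type": "mcp",
    "server_label": "demo",
    "server_url": "http://localhost:3000/sse",
    "require_approval": "never",
}

# First response performs an MCP tool call.
first = client.responses.create(
    model="openai/gpt-4o",
    input="What tools do you have available?",
    tools=[mcp_tool],
)

# Second response chains off the first; previously this path raised an error
# when the prior response contained MCP tool calls.
second = client.responses.create(
    model="openai/gpt-4o",
    input="Use one of those tools now.",
    previous_response_id=first.id,
    tools=[mcp_tool],
)
print(second.output_text)
```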
Signed-off-by: Gordon Sim <gsim@redhat.com>
# What does this PR do?
A _bunch_ of cleanup for the Responses tests.
- Got rid of the YAML test cases and moved them to simple pydantic models
- Splitting the large monolithic test file into multiple focused test files:
- `test_basic_responses.py` for basic and image response tests
- `test_tool_responses.py` for tool-related tests
- `test_file_search.py` for file search specific tests
- Adding a `StreamingValidator` helper class to standardize streaming response validation (rough sketch below)
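For orientation, a rough sketch of what such a helper might look like; the class name comes from this PR, but the methods and internals below are illustrative assumptions, not the actual test code:
```
# Illustrative sketch only; the real StreamingValidator in the test suite may differ.
class StreamingValidator:
    def __init__(self, chunks):
        self.chunks = list(chunks)
        self.event_types = [chunk.type for chunk in self.chunks]

    def assert_basic_event_sequence(self):
        # Every stream should open with a created event and end with a completed one.
        assert self.event_types[0] == "response.created"
        assert self.event_types[-1] == "response.completed"

    def assert_has_event(self, event_type: str):
        assert event_type in self.event_types, f"missing {event_type} in {self.event_types}"
```
In the tests it would be constructed from the collected stream chunks and used to assert on event presence and ordering.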
## Test Plan
Run the tests:
```
pytest -s -v tests/integration/non_ci/responses/ \
--stack-config=starter \
--text-model openai/gpt-4o \
--embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
-k "client_with_models"
```
# What does this PR do?
Adds proper streaming events for MCP tool listing (`mcp_list_tools.in_progress` and `mcp_list_tools.completed`). Also refactors things a bit more.
## Test Plan
Verified existing integration tests pass with the refactored code. The test `test_response_streaming_multi_turn_tool_execution` has been updated to check for the new MCP list tools streaming events (sketch below).
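A hedged sketch of the kind of assertion that update adds; the `response.`-prefixed event type strings and the `chunks` collection are assumptions, not the exact test code:
```
# Sketch of the assertion pattern; `chunks` is assumed to hold the collected stream events.
def assert_mcp_list_tools_events(chunks):
    event_types = [chunk.type for chunk in chunks]
    assert "response.mcp_list_tools.in_progress" in event_types
    assert "response.mcp_list_tools.completed" in event_types
    # The in-progress event should arrive before the corresponding completed event.
    assert event_types.index("response.mcp_list_tools.in_progress") < event_types.index(
        "response.mcp_list_tools.completed"
    )
```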
# What does this PR do?
Adds content part streaming events to the OpenAI-compatible Responses API to support more granular streaming of response content. This introduces:
1. New schema types for content parts: `OpenAIResponseContentPart` with variants for text output and refusals
2. New streaming event types:
- `OpenAIResponseObjectStreamResponseContentPartAdded` for when content parts begin
- `OpenAIResponseObjectStreamResponseContentPartDone` for when content parts complete
3. Implementation in the reference provider to emit these events during streaming responses. Also emits MCP call arguments just like function-call ones (see the consumer sketch below).
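As an illustration of how a client might consume these events, a hedged sketch follows; the event type strings mirror the OpenAI Responses streaming naming convention and are assumptions here, as is the stream setup:
```
# Sketch of a consumer for the new content-part events; stream setup is assumed.
def collect_text_parts(stream):
    parts: list[str] = []
    for event in stream:
        if event.type == "response.content_part.added":
            parts.append("")  # a new content part has started
        elif event.type == "response.output_text.delta":
            parts[-1] += event.delta  # text accumulates inside the current part
        elif event.type == "response.content_part.done":
            pass  # the current part is finalized
    return parts
```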
## Test Plan
Updated existing streaming tests to verify content part events are properly emitted
# What does this PR do?
Enhances tool execution streaming by adding support for real-time progress events during tool calls. This implementation adds streaming events for MCP and web search tools, including in-progress, searching, completed, and failed states.
The refactored `_execute_tool_call` method now returns an async iterator that yields streaming events throughout the tool execution lifecycle.
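A minimal sketch of that async-iterator shape; the function name, event payloads, and states below are illustrative, not the provider's actual code:
```
# Illustrative shape of a tool-call executor that yields progress events; not the
# actual reference-provider implementation.
from collections.abc import AsyncIterator


async def execute_tool_call(tool_name: str, arguments: dict) -> AsyncIterator[dict]:
    yield {"type": "in_progress", "tool": tool_name}
    try:
        if tool_name == "web_search":
            yield {"type": "searching", "query": arguments.get("query", "")}
        result = {"ok": True}  # stand-in for the real tool invocation
        yield {"type": "completed", "tool": tool_name, "result": result}
    except Exception as exc:
        yield {"type": "failed", "tool": tool_name, "error": str(exc)}
```
Callers would drive it with `async for event in execute_tool_call(...)`, forwarding each event into the response stream.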
## Test Plan
Updated the integration test `test_response_streaming_multi_turn_tool_execution` to verify the presence and structure of new streaming events, including:
- Checking for MCP in-progress and completed events
- Verifying that progress events contain required fields (item_id, output_index, sequence_number)
- Ensuring completed events have the necessary sequence_number field
Some fixes to MCP tests, and a bunch of fixes for Vector providers.
I also enabled a bunch of Vector IO tests to run with
`LlamaStackLibraryClient`.
## Test Plan
Run Responses tests with llama stack library client:
```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
--text-model openai/gpt-4o \
--embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
-k "client_with_models"
```
Do the same with `-k openai_client`
The rest should be taken care of by CI.
This PR kills the verifications infrastructure, which is no longer used.
It was relocated to the `llama-stack-evals`
(https://github.com/meta-llama/llama-stack-evals) repository previously.
Responses tests used this infrastructure, but that wasn't strictly
necessary; it was just a little useful back when @bbrownin introduced the
tests. On Discord, we agreed that these tests can be moved to our regular
integration test infra.
## Test Plan
Some tests currently do fail (although they run!). I will send a
follow-up PR that makes them all pass.