llama-stack-mirror/tests/unit/providers
Shabana Baig add64e8e2a
feat: Add instructions parameter in response object (#3741)
# Problem
The current inline provider appends the user-provided instructions to the
messages as a system prompt, but the returned response object does not
contain the `instructions` field (as specified in the OpenAI Responses
spec).

# What does this PR do?
This pull request adds the `instructions` field to the response object
definition and updates the inline provider accordingly. It also ensures
that instructions from a previous response are not carried over to the
next response (as specified in the OpenAI spec).
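The carry-over behavior can be sketched as a minimal Python model. All names here (`ResponseObject`, `create_response`, `_store`) are hypothetical illustrations of the intended semantics, not the actual llama-stack implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResponseObject:
    id: str
    output_text: str
    # Echoed back to the caller, per the OpenAI Responses spec.
    instructions: Optional[str] = None


_store: dict[str, ResponseObject] = {}


def create_response(
    input_text: str,
    instructions: Optional[str] = None,
    previous_response_id: Optional[str] = None,
) -> ResponseObject:
    messages = []
    # Only instructions supplied on *this* call become the system prompt.
    if instructions is not None:
        messages.append({"role": "system", "content": instructions})
    if previous_response_id is not None:
        prev = _store[previous_response_id]
        # Prior turns are replayed, but prev.instructions is deliberately
        # NOT carried over into the new request.
        messages.append({"role": "assistant", "content": prev.output_text})
    messages.append({"role": "user", "content": input_text})
    resp = ResponseObject(
        id=f"resp_{len(_store)}",
        output_text=f"echo: {input_text}",  # stand-in for a model call
        instructions=instructions,
    )
    _store[resp.id] = resp
    return resp
```

Chaining a second request via `previous_response_id` without passing `instructions` should yield a response whose `instructions` field is unset, rather than inheriting the first request's value.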

Closes [#3566](https://github.com/llamastack/llama-stack/issues/3566)

## Test Plan

- Manually verified that the model response changes according to the
supplied `instructions` field.
- Added a unit test checking that instructions from a previous response
are not carried over to the next response.
- Added integration tests checking the `instructions` parameter in the
returned response object.
- Added new recordings for the integration tests.

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-20 13:10:37 -07:00
| Entry | Latest commit | Date |
|---|---|---|
| agent | feat: Add support for Conversations in Responses API (#3743) | 2025-10-10 11:57:40 -07:00 |
| agents | feat: Add instructions parameter in response object (#3741) | 2025-10-20 13:10:37 -07:00 |
| batches | feat: Add /v1/embeddings endpoint to batches API (#3384) | 2025-10-10 13:25:58 -07:00 |
| files | feat(files): fix expires_after API shape (#3604) | 2025-09-29 21:29:15 -07:00 |
| inference | fix(tests): reduce some test noise (#3825) | 2025-10-16 09:52:16 -07:00 |
| inline | feat: Add responses and safety impl extra_body (#3781) | 2025-10-15 15:01:37 -07:00 |
| nvidia | fix(tests): reduce some test noise (#3825) | 2025-10-16 09:52:16 -07:00 |
| utils | fix(openai_mixin): no yelling for model listing if API keys are not provided (#3826) | 2025-10-16 10:12:13 -07:00 |
| vector_io | fix(vector-io): handle missing document_id in insert_chunks (#3521) | 2025-10-15 11:02:48 -07:00 |
| test_bedrock.py | fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) | 2025-09-11 11:41:53 +02:00 |
| test_configs.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |