Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-21 16:07:16 +00:00
# Problem

The current inline provider appends the user-provided instructions to the messages as a system prompt, but the returned response object does not contain the `instructions` field (as specified in the OpenAI Responses spec).

# What does this PR do?

This pull request adds the `instructions` field to the response object definition and updates the inline provider. It also ensures that the instructions from a previous response are not carried over to the next response (as specified in the OpenAI spec).

Closes [#3566](https://github.com/llamastack/llama-stack/issues/3566)

## Test Plan

- Tested manually for changes in the model response w.r.t. the supplied `instructions` field.
- Added a unit test to check that the instructions from a previous response are not carried over to the next response.
- Added integration tests to check the `instructions` parameter in the returned response object.
- Added new recordings for the integration tests.

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>