llama-stack-mirror/llama_stack/providers
Ashwin Bharambe e039b61d26
feat(responses)!: add in_progress, failed, content part events (#3765)
## Summary
- add schema + runtime support for response.in_progress / response.failed / response.incomplete
- stream content parts with proper indexes and reasoning slots (see the consumer sketch after this list)
- align tests + docs with the richer event payloads
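
For context, a minimal consumer sketch (not part of this PR's code): it assumes the stream yields event objects whose `type` field follows the OpenAI Responses event names listed above, and that `client.responses.create` stands in for whichever client entry point you point at a Llama Stack server. `consume_response_stream` is a hypothetical helper name.

```python
def consume_response_stream(client, model: str, prompt: str) -> None:
    """Walk a Responses stream, reacting to lifecycle and content-part events.

    Sketch only: event/field names follow the OpenAI Responses streaming
    shape (response.in_progress, response.failed, response.incomplete,
    response.content_part.added, ...) that this change aligns with.
    """
    stream = client.responses.create(model=model, input=prompt, stream=True)
    for event in stream:
        if event.type == "response.in_progress":
            print("response started:", event.response.id)
        elif event.type == "response.content_part.added":
            # Content parts carry indexes so later deltas can be routed to
            # the right output item / part slot (including reasoning parts).
            print(f"new part at output_index={event.output_index}, "
                  f"content_index={event.content_index}")
        elif event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
        elif event.type == "response.failed":
            print("failed:", event.response.error)
        elif event.type == "response.incomplete":
            print("incomplete:", event.response.incomplete_details)
        elif event.type == "response.completed":
            print("\ndone")
```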

## Testing
- uv run pytest tests/unit/providers/agents/meta_reference/test_openai_responses.py::test_create_openai_response_with_string_input
- uv run pytest tests/unit/providers/agents/meta_reference/test_response_conversion_utils.py
2025-10-10 07:27:34 -07:00
inline        feat(responses)!: add in_progress, failed, content part events (#3765)                2025-10-10 07:27:34 -07:00
registry      fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674)      2025-10-08 07:29:43 -04:00
remote        fix: allow skipping model availability check for vLLM (#3739)                         2025-10-10 07:23:13 -07:00
utils         feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403)  2025-10-09 14:27:51 -07:00
__init__.py   API Updates (#73)                                                                     2024-09-17 19:51:35 -07:00
datatypes.py  feat: combine ProviderSpec datatypes (#3378)                                          2025-09-18 16:10:00 +02:00