llama-stack-mirror/llama_stack/providers/inline
Ashwin Bharambe e039b61d26
feat(responses)!: add in_progress, failed, content part events (#3765)
## Summary
- add schema + runtime support for response.in_progress / response.failed / response.incomplete
- stream content parts with proper indexes and reasoning slots (see the sketch after this list)
- align tests + docs with the richer event payloads
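
As a rough illustration of what the richer stream looks like from the client side, here is a minimal Python sketch. The event names (`response.in_progress`, `response.failed`, `response.incomplete`, `response.content_part.added`) come from the summary above; the base URL, model id, and fields such as `content_index` follow the OpenAI-compatible Responses API that llama-stack exposes and are assumptions for illustration, not part of this diff.

```python
# Minimal sketch, not from this PR: assumes a local llama-stack server
# exposing the OpenAI-compatible Responses API. The base URL and model
# id below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

stream = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model id
    input="Write a haiku about streaming APIs.",
    stream=True,
)

for event in stream:
    if event.type == "response.in_progress":
        print("[response started]")  # new lifecycle event
    elif event.type == "response.content_part.added":
        # content parts carry an index so clients can slot text vs.
        # reasoning parts into the right output position
        print(f"\n[part {event.content_index} added]")
    elif event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type in ("response.failed", "response.incomplete"):
        print(f"\n[terminal event: {event.type}]")  # new terminal events
```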

## Testing
- uv run pytest tests/unit/providers/agents/meta_reference/test_openai_responses.py::test_create_openai_response_with_string_input
- uv run pytest tests/unit/providers/agents/meta_reference/test_response_conversion_utils.py
2025-10-10 07:27:34 -07:00
| Directory | Latest commit | Date |
|---|---|---|
| `agents` | feat(responses)!: add in_progress, failed, content part events (#3765) | 2025-10-10 07:27:34 -07:00 |
| `batches` | feat(batches, completions): add /v1/completions support to /v1/batches (#3309) | 2025-09-05 11:59:57 -07:00 |
| `datasetio` | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| `eval` | feat: update eval runner to use openai endpoints (#3588) | 2025-09-29 13:13:53 -07:00 |
| `files/localfs` | feat(tests): make inference_recorder into api_recorder (include tool_invoke) (#3403) | 2025-10-09 14:27:51 -07:00 |
| `inference` | fix: update dangling references to llama download command (#3763) | 2025-10-09 18:35:02 -07:00 |
| `ios/inference` | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00 |
| `post_training` | fix: update dangling references to llama download command (#3763) | 2025-10-09 18:35:02 -07:00 |
| `safety` | feat: use /v1/chat/completions for safety model inference (#3591) | 2025-09-30 11:01:44 -07:00 |
| `scoring` | chore: use openai_chat_completion for llm as a judge scoring (#3635) | 2025-10-01 09:44:31 -04:00 |
| `telemetry` | chore: Remove debug logging from telemetry adapter (#3643) | 2025-10-01 15:16:23 -07:00 |
| `tool_runtime` | chore: remove dead code (#3729) | 2025-10-07 20:26:02 -07:00 |
| `vector_io` | chore: fix flaky unit test and add proper shutdown for file batches (#3725) | 2025-10-07 14:23:14 -07:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |