llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference/responses
ehhuang 80d58ab519
chore: refactor (chat)completions endpoints to use shared params struct (#3761)
# What does this PR do?

Converts the openai(_chat)_completions params to a Pydantic `BaseModel` to reduce code duplication across all providers.
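
The shared params struct replaces the long keyword-argument lists that each provider's (chat) completions method previously duplicated. A minimal sketch of the pattern, using hypothetical names (`OpenAIChatCompletionRequest`, `ExampleProvider`) rather than the actual models introduced in this PR:

```python
from typing import Any

from pydantic import BaseModel


class OpenAIChatCompletionRequest(BaseModel):
    """Hypothetical shared params struct for the chat completions endpoint."""

    model: str
    messages: list[dict[str, Any]]
    temperature: float | None = None
    top_p: float | None = None
    max_tokens: int | None = None
    stream: bool | None = None


class ExampleProvider:
    """Illustrative provider: one validated model instead of many repeated kwargs."""

    async def openai_chat_completion(self, params: OpenAIChatCompletionRequest) -> Any:
        # Unset fields stay None and are dropped before calling the backend client.
        request_kwargs = params.model_dump(exclude_none=True)
        ...  # forward request_kwargs to the underlying client
```

Each provider accepts the single validated model and unpacks only the fields its backend supports.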

## Test Plan
CI

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/llamastack/llama-stack/pull/3761).
* #3777
* __->__ #3761
2025-10-10 15:46:34 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore(responses): Refactor Responses Impl to be civilized (#3138) | 2025-08-15 00:05:35 +00:00 |
| `openai_responses.py` | feat: Add support for Conversations in Responses API (#3743) | 2025-10-10 11:57:40 -07:00 |
| `streaming.py` | chore: refactor (chat)completions endpoints to use shared params struct (#3761) | 2025-10-10 15:46:34 -07:00 |
| `tool_executor.py` | fix: add traces for tool calls and mcp tool listing (#3722) | 2025-10-09 09:59:09 -07:00 |
| `types.py` | feat: reuse previous mcp tool listings where possible (#3710) | 2025-10-10 09:28:25 -07:00 |
| `utils.py` | feat(responses)!: add in_progress, failed, content part events (#3765) | 2025-10-10 07:27:34 -07:00 |