llama-stack-mirror/llama_stack
Ben Browning 8064e3d412 chore: Clean up variable names, duplication in openai_responses.py
Some small fixes to clarify variable names so that they more closely
match what they do (input_messages -> input_items), use an
intermediate variable, and add some code comments about how we
aggregate streaming tool call arguments from the inference provider
when building our response.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-13 09:59:01 -04:00
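The commit above mentions aggregating streaming tool call arguments from the inference provider while building the response. A minimal sketch of that general pattern, assuming OpenAI-style streaming deltas where argument JSON arrives in fragments keyed by tool call index (the function and field names here are illustrative, not the actual code in openai_responses.py):

```python
def aggregate_tool_call_args(chunks):
    """Accumulate per-index tool call name/argument fragments.

    Each chunk may carry partial deltas for one or more tool calls;
    fragments for the same call share an "index" and must be
    concatenated in arrival order to recover the full arguments JSON.
    """
    calls = {}  # index -> {"name": str, "arguments": str}
    for chunk in chunks:
        for delta in chunk.get("tool_calls", []):
            idx = delta["index"]
            call = calls.setdefault(idx, {"name": "", "arguments": ""})
            if delta.get("name"):
                call["name"] = delta["name"]
            # Argument JSON arrives in fragments; concatenate in order.
            call["arguments"] += delta.get("arguments", "")
    return [calls[i] for i in sorted(calls)]


# Example: one tool call whose arguments span two streamed chunks.
chunks = [
    {"tool_calls": [{"index": 0, "name": "get_weather", "arguments": '{"cit'}]},
    {"tool_calls": [{"index": 0, "arguments": 'y": "Paris"}'}]},
]
result = aggregate_tool_call_args(chunks)
print(result)
# [{'name': 'get_weather', 'arguments': '{"city": "Paris"}'}]
```

An intermediate variable like `call` above is the kind of small readability change the commit describes: it avoids repeating the dictionary lookup on every fragment.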
apis Fixes for multi-turn tool calls in Responses API 2025-05-08 16:21:15 -04:00
cli chore(refact)!: simplify config management (#1105) 2025-05-07 09:18:12 -07:00
distribution feat: add function tools to openai responses 2025-05-08 07:03:47 -04:00
models fix: llama4 tool use prompt fix (#2103) 2025-05-06 22:18:31 -07:00
providers chore: Clean up variable names, duplication in openai_responses.py 2025-05-13 09:59:01 -04:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates chore(refact)!: simplify config management (#1105) 2025-05-07 09:18:12 -07:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00