llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
Ben Browning 8064e3d412 chore: Clean up variable names, duplication in openai_responses.py
Some small fixes to clarify variable names so that they more closely
match what they do (input_messages -> input_items), use an
intermediate variable, and add some code comments about how we
aggregate streaming tool call arguments from the inference provider
when building our response.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-13 09:59:01 -04:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
agent_instance.py feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
agents.py fix: Restore previous responses to input list, not messages 2025-05-08 07:03:47 -04:00
config.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
openai_responses.py chore: Clean up variable names, duplication in openai_responses.py 2025-05-13 09:59:01 -04:00
persistence.py feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
safety.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00