llama-stack-mirror/llama_stack/providers
Ben Browning 8064e3d412 chore: Clean up variable names, duplication in openai_responses.py
Some small fixes to clarify variable names so that they more closely
match what they do (input_messages -> input_items), use an
intermediate variable, and add some code comments about how we
aggregate streaming tool call arguments from the inference provider
when building our response.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-13 09:59:01 -04:00
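The aggregation the commit message refers to — collecting streamed tool-call argument fragments into complete calls — can be sketched as follows. This is a hypothetical illustration, not the repository's actual code: the chunk shape (`index`, `name`, `arguments_fragment`) and the helper name are assumptions standing in for the provider's streaming delta format.

```python
def aggregate_tool_call_chunks(chunks):
    """Merge streamed tool-call deltas into complete calls.

    `chunks` is an iterable of (index, name, arguments_fragment) tuples,
    where `name` may be None on continuation fragments. Streaming
    providers typically emit the tool name once and then the JSON
    arguments in pieces; fragments sharing an index belong to the same
    tool call. Returns a dict mapping index -> {"name", "arguments"}.
    """
    calls = {}
    for index, name, args_fragment in chunks:
        # Create an entry the first time we see this tool-call index.
        entry = calls.setdefault(index, {"name": None, "arguments": ""})
        if name is not None:
            entry["name"] = name
        if args_fragment:
            # Concatenate argument fragments in arrival order.
            entry["arguments"] += args_fragment
    return calls
```

For example, two fragments `(0, "get_weather", '{"city"')` and `(0, None, ': "Paris"}')` would be merged into a single call with arguments `{"city": "Paris"}`.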
inline chore: Clean up variable names, duplication in openai_responses.py 2025-05-13 09:59:01 -04:00
registry feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) 2025-05-06 16:50:22 -07:00
remote feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
tests chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
utils feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00