llama-stack-mirror/llama_stack/providers
Ben Browning 52a69f0bf9 Extract some helper methods out in openai_responses impl
This extracts out helper methods to convert previous responses into
messages and to convert OpenAI choices (from a chat completion
response) into output messages for the OpenAI Responses output.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-28 10:37:33 -07:00
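
A rough sketch of the two helpers described in the commit message is shown below. The function names, the output-message dataclass, and the field shapes are assumptions for illustration only, not the actual llama_stack openai_responses implementation.

# Hypothetical sketch of the extracted helpers; names and field layouts
# are assumptions and may differ from the real implementation.
from dataclasses import dataclass


@dataclass
class OpenAIResponseOutputMessage:
    """Assumed output-message shape for the OpenAI Responses output."""
    role: str
    content: str


def _previous_response_to_messages(previous_response) -> list[dict]:
    """Flatten a prior Responses API response into chat-completion messages."""
    messages = []
    for item in previous_response.output:
        # Assumes each output item carries a role and plain-text content.
        messages.append({"role": item.role, "content": item.content})
    return messages


def _choices_to_output_messages(choices) -> list[OpenAIResponseOutputMessage]:
    """Convert chat-completion choices into Responses API output messages."""
    return [
        OpenAIResponseOutputMessage(
            role=choice.message.role,
            content=choice.message.content,
        )
        for choice in choices
    ]
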
inline          Extract some helper methods out in openai_responses impl                       2025-04-28 10:37:33 -07:00
registry        Stub in an initial OpenAI Responses API                                        2025-04-28 10:37:33 -07:00
remote          fix: updated watsonx inference chat apis with new repo changes (#2033)         2025-04-26 10:17:52 -07:00
tests           refactor: move all llama code to models/llama out of meta reference (#1887)   2025-04-07 15:03:58 -07:00
utils           feat: new system prompt for llama4 (#2031)                                     2025-04-25 11:29:08 -07:00
__init__.py     API Updates (#73)                                                              2024-09-17 19:51:35 -07:00
datatypes.py    feat: add health to all providers through providers endpoint (#1418)          2025-04-14 11:59:36 +02:00