llama-stack-mirror/llama_stack/providers
Ben Browning a4b573d750 Fix OpenAI API response format handling
This fixes the schema of OpenAI API chat completion response formats,
including how those response formats (and other nested parameters in
the chat completion request) get translated into parameters for calls
to the backend OpenAI-compatible providers.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-13 13:39:56 -04:00
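
As context for the commit message above, the translation it describes might look roughly like the sketch below. The class and helper names (JsonSchemaResponseFormat, response_format_to_params) are hypothetical illustrations, not the repository's actual code; the point is that a nested, typed response_format in the chat completion request has to be dumped to a plain dict before it can be forwarded to an OpenAI-compatible backend provider.

```python
from typing import Any, Optional

from pydantic import BaseModel


class JsonSchemaResponseFormat(BaseModel):
    # Mirrors OpenAI's {"type": "json_schema", "json_schema": {...}} shape.
    type: str = "json_schema"
    json_schema: dict[str, Any]


def response_format_to_params(fmt: Optional[JsonSchemaResponseFormat]) -> dict[str, Any]:
    """Translate a typed response_format into plain kwargs for an
    OpenAI-compatible backend client (hypothetical helper, not the repo's API)."""
    if fmt is None:
        return {}
    # Nested pydantic models must be dumped to plain dicts before being
    # forwarded, otherwise the backend receives unserializable objects.
    return {"response_format": fmt.model_dump(exclude_none=True)}


if __name__ == "__main__":
    params = response_format_to_params(
        JsonSchemaResponseFormat(
            json_schema={"name": "answer", "schema": {"type": "object"}}
        )
    )
    # params can now be merged into the kwargs of an OpenAI-compatible
    # chat.completions.create(...) call.
    print(params)
```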
inline        fix: misc fixes for tests kill horrible warnings                               2025-04-12 17:12:11 -07:00
registry      fix: use torchao 0.8.0 for inference (#1925)                                   2025-04-10 13:39:20 -07:00
remote        Fix OpenAI API response format handling                                        2025-04-13 13:39:56 -04:00
tests         refactor: move all llama code to models/llama out of meta reference (#1887)   2025-04-07 15:03:58 -07:00
utils         Fix OpenAI API response format handling                                        2025-04-13 13:39:56 -04:00
__init__.py   API Updates (#73)                                                              2024-09-17 19:51:35 -07:00
datatypes.py  chore: more mypy checks (ollama, vllm, ...) (#1777)                            2025-04-01 17:12:39 +02:00