llama-stack-mirror/llama_stack/providers
Ben Browning 1dcde0de67 fix: OpenAI Completions API and Fireworks
We were passing a dict into the compat mixin for OpenAI Completions
when using Llama models with Fireworks, and that was breaking some
strong typing code that was added in openai_compat.py. We shouldn't
have been converting these params to a dict in that case anyway, so
this change passes the params through in their original types when
calling the OpenAIChatCompletionToLlamaStackMixin.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-18 16:06:29 -04:00
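The fix described above can be sketched as follows. This is a minimal illustration, not the actual Llama Stack code: `CompletionParams`, `call_broken`, and `call_fixed` are hypothetical stand-ins, and the mixin body is reduced to a type check that mimics the strong typing in openai_compat.py.

```python
from dataclasses import dataclass


# Hypothetical stand-in for the real typed params object.
@dataclass
class CompletionParams:
    model: str
    prompt: str
    temperature: float = 0.7


class OpenAIChatCompletionToLlamaStackMixin:
    """Simplified stand-in for the real mixin in openai_compat.py."""

    def openai_completion(self, params: CompletionParams) -> str:
        # The strongly typed path: a plain dict sneaking in here is
        # what broke before the fix.
        if not isinstance(params, CompletionParams):
            raise TypeError(
                f"expected CompletionParams, got {type(params).__name__}"
            )
        return f"completion for {params.model}"


def call_broken(mixin: OpenAIChatCompletionToLlamaStackMixin,
                params: CompletionParams) -> str:
    # Before the fix: params were converted to a dict first,
    # which the typed mixin code rejects.
    return mixin.openai_completion(vars(params))


def call_fixed(mixin: OpenAIChatCompletionToLlamaStackMixin,
               params: CompletionParams) -> str:
    # After the fix: pass the params object through unchanged,
    # in its original type.
    return mixin.openai_completion(params)
```

The point of the change is simply that the conversion step was unnecessary, so removing it both simplifies the call site and satisfies the type checks added in openai_compat.py.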
Name          Last commit                                                              Date
inline        fix: OAI compat endpoint for meta reference inference provider (#1962)  2025-04-17 11:16:04 -07:00
registry      fix: use torchao 0.8.0 for inference (#1925)                            2025-04-10 13:39:20 -07:00
remote        fix: OpenAI Completions API and Fireworks                               2025-04-18 16:06:29 -04:00
tests         refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
utils         fix: OAI compat endpoint for meta reference inference provider (#1962)  2025-04-17 11:16:04 -07:00
__init__.py   API Updates (#73)                                                       2024-09-17 19:51:35 -07:00
datatypes.py  feat: add health to all providers through providers endpoint (#1418)    2025-04-14 11:59:36 +02:00