llama-stack-mirror/llama_stack
Ben Browning 1dcde0de67 fix: OpenAI Completions API and Fireworks
We were passing a dict into the OpenAI Completions compat mixin when
using Llama models with Fireworks, and that was breaking the strong
typing code that was added in openai_compat.py. We shouldn't have
been converting these params to a dict in that case anyway, so this
change passes the params through in their original types when
calling OpenAIChatCompletionToLlamaStackMixin (sketched below).

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-18 16:06:29 -04:00
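
A minimal sketch of the shape of this fix, assuming a Fireworks-style adapter that mixes in OpenAIChatCompletionToLlamaStackMixin. The ChatCompletionParams dataclass, its fields, and FireworksLikeAdapter are illustrative stand-ins, not the actual llama_stack code; the point is only that the mixin receives the typed params object rather than a dict.

    # Illustrative sketch only: ChatCompletionParams and FireworksLikeAdapter are
    # stand-ins, not the actual llama_stack / Fireworks provider classes.
    import asyncio
    from dataclasses import dataclass, field


    @dataclass
    class ChatCompletionParams:
        """Stand-in for the strongly typed OpenAI chat completion params."""
        model: str
        messages: list = field(default_factory=list)
        temperature: float = 1.0


    class OpenAIChatCompletionToLlamaStackMixin:
        """Stand-in for the compat mixin: it expects the typed params object."""

        async def openai_chat_completion(self, params: ChatCompletionParams):
            # Attribute access like params.model fails if a plain dict is passed in.
            return {"model": params.model, "temperature": params.temperature}


    class FireworksLikeAdapter(OpenAIChatCompletionToLlamaStackMixin):
        async def openai_chat_completion(self, params: ChatCompletionParams):
            # Before the fix (conceptually), the params were flattened to a dict
            # before delegating, which broke the mixin's strong typing:
            #     await super().openai_chat_completion(vars(params))  # AttributeError
            # After the fix, the params are passed through in their original type.
            return await super().openai_chat_completion(params)


    async def main() -> None:
        adapter = FireworksLikeAdapter()
        params = ChatCompletionParams(
            model="llama-3",
            messages=[{"role": "user", "content": "hi"}],
        )
        print(await adapter.openai_chat_completion(params))


    if __name__ == "__main__":
        asyncio.run(main())

Passing the typed object straight through keeps the strict annotations in openai_compat.py intact, without a dict conversion layer in the adapter.
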
apis             feat(agents): add agent naming functionality (#1922)  2025-04-17 07:02:47 -07:00
cli              feat: allow building distro with external providers (#1967)  2025-04-18 17:18:28 +02:00
distribution     feat: allow building distro with external providers (#1967)  2025-04-18 17:18:28 +02:00
models           fix: OAI compat endpoint for meta reference inference provider (#1962)  2025-04-17 11:16:04 -07:00
providers        fix: OpenAI Completions API and Fireworks  2025-04-18 16:06:29 -04:00
strong_typing    chore: more mypy checks (ollama, vllm, ...) (#1777)  2025-04-01 17:12:39 +02:00
templates        docs: Add tips for debugging remote vLLM provider (#1992)  2025-04-18 14:47:47 +02:00
__init__.py      export LibraryClient  2024-12-13 12:08:00 -08:00
env.py           refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)  2025-03-04 14:53:47 -08:00
log.py           chore: Remove style tags from log formatter (#1808)  2025-03-27 10:18:21 -04:00
schema_utils.py  fix: dont check protocol compliance for experimental methods  2025-04-12 16:26:32 -07:00