llama-stack-mirror/llama_stack/providers/remote/inference/fireworks
Ben Browning 602e949a46
fix: OpenAI Completions API and Fireworks (#1997)
# What does this PR do?

We were passing a dict into the OpenAI-compat mixin for OpenAI Completions when
using Llama models with Fireworks, which broke the strong typing that was
added in openai_compat.py. We shouldn't have been converting these params to a
dict in that case anyway, so this change passes the params through with their
actual original types when calling the OpenAIChatCompletionToLlamaStackMixin.
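
A minimal sketch of the before/after pattern. The names here
(`ChatCompletionParams`, `StrictMixin`) are hypothetical stand-ins for the
real strongly-typed params and mixin in `openai_compat.py`, just to show why
a plain dict breaks a typed code path:

```python
# Illustration only: hypothetical stand-ins for the real llama_stack types.
from dataclasses import dataclass


@dataclass
class ChatCompletionParams:
    model: str
    temperature: float = 1.0


class StrictMixin:
    # Strongly-typed method: expects the params object, not a dict.
    def openai_chat_completion(self, params: ChatCompletionParams) -> str:
        # Attribute access fails if a plain dict is passed in.
        return f"{params.model}@{params.temperature}"


mixin = StrictMixin()

# Before the fix: converting the params to a dict breaks attribute access.
broke = False
try:
    mixin.openai_chat_completion({"model": "llama-v3", "temperature": 0.2})
except AttributeError:
    broke = True

# After the fix: pass the original strongly-typed params object straight through.
result = mixin.openai_chat_completion(
    ChatCompletionParams(model="llama-v3", temperature=0.2)
)
```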

## Test Plan

All of the fireworks provider verification tests were failing due to
some OpenAI compatibility cleanup in #1962. The changes in that PR were
good to make, and this just cleans up the fireworks provider code to
stop passing in untyped dicts to some of those `openai_compat.py`
methods since we have the original strongly-typed parameters we can pass
in.

```
llama stack run --image-type venv tests/verifications/openai-api-verification-run.yaml
```

```
python -m pytest -s -v tests/verifications/openai_api/test_chat_completion.py  --provider=fireworks-llama-stack
```

Before this PR, all of the fireworks OpenAI verification tests were
failing. Now, most of them are passing.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-21 11:49:12 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `config.py` | feat: add (openai, anthropic, gemini) providers via litellm (#1267) | 2025-02-25 22:07:33 -08:00 |
| `fireworks.py` | fix: OpenAI Completions API and Fireworks (#1997) | 2025-04-21 11:49:12 -07:00 |
| `models.py` | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |