llama-stack-mirror/llama_stack/providers/remote/inference/ollama
Ben Browning 8747210470 fix: ollama openai completion and chat completion params
The ollama provider was using an older variant of the code that
converts incoming parameters from the OpenAI API completions and chat
completions endpoints into requests sent to the backend provider over
its own OpenAI client. This updates it to use the common
`prepare_openai_completion_params` method used elsewhere, which takes
care of removing stray `None` values, even in nested structures.
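
For illustration, a minimal sketch of that None-stripping behavior
(the real helper lives in the shared OpenAI compatibility utilities
and may differ in detail; the model name below is just an example):

    def prepare_openai_completion_params(**params):
        # Recursively drop keys/items whose value is None so they are
        # omitted from the request instead of being sent as null.
        def _strip_none(value):
            if isinstance(value, dict):
                return {k: _strip_none(v) for k, v in value.items() if v is not None}
            if isinstance(value, list):
                return [_strip_none(v) for v in value if v is not None]
            return value

        return {k: _strip_none(v) for k, v in params.items() if v is not None}

    # temperature=None is dropped entirely, and the nested None inside
    # response_format is dropped too.
    params = prepare_openai_completion_params(
        model="llama3.2:3b",
        temperature=None,
        response_format={"type": "text", "schema": None},
    )
    # -> {"model": "llama3.2:3b", "response_format": {"type": "text"}}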

Without this, parameters whose value is `None` still make their way to
ollama and actually influence its inference output, unlike when those
parameters are not sent at all.
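
For example (request shape and values illustrative, not taken from the
actual traffic), the old path could send a body like

    {"model": "llama3.2:3b", "prompt": "Say hi", "temperature": null}

where the explicit null can steer inference differently than omitting
the field, while the fixed path sends

    {"model": "llama3.2:3b", "prompt": "Say hi"}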

This passes tests/integration/inference/test_openai_completion.py and
fixes the issue found in #2098, which was verified via manually
crafted curl requests.

Fixes #2098

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-08 18:20:50 -04:00
__init__.py Auto-generate distro yamls + docs (#468) 2024-11-18 14:57:06 -08:00
config.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
models.py refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
ollama.py fix: ollama openai completion and chat completion params 2025-05-08 18:20:50 -04:00