Mirror of https://github.com/meta-llama/llama-stack.git
The ollama provider was using an older variant of the code that converts incoming parameters from the OpenAI API completion and chat-completion endpoints into requests sent to the backend provider over its own OpenAI client. This change updates it to use the common `prepare_openai_completion_params` method used elsewhere, which takes care of removing stray `None` values even in nested structures. Without this, parameters with a value of `None` still reach ollama and influence its inference output, unlike when those parameters are not sent at all.

This passes tests/integration/inference/test_openai_completion.py and fixes the issue found in #2098, which was tested via manually crafted curl requests.

Fixes #2098

Signed-off-by: Ben Browning <bbrownin@redhat.com>
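For illustration, here is a minimal sketch of the kind of filtering a helper like `prepare_openai_completion_params` performs, recursively dropping `None` values from nested structures before the request is handed to the OpenAI client. The function names and example parameters below are assumptions for illustration, not the actual llama-stack implementation.

```python
from typing import Any


def strip_none(value: Any) -> Any:
    """Recursively drop entries whose value is None from nested dicts and lists."""
    if isinstance(value, dict):
        return {k: strip_none(v) for k, v in value.items() if v is not None}
    if isinstance(value, (list, tuple)):
        return type(value)(strip_none(v) for v in value if v is not None)
    return value


def prepare_params(**kwargs: Any) -> dict[str, Any]:
    # Only parameters the caller actually set are forwarded to the backend,
    # so ollama sees the same request as an equivalent hand-crafted curl call.
    return strip_none(kwargs)


# Example: temperature=None is dropped instead of being sent to the provider.
params = prepare_params(model="llama3.2:3b", prompt="Hello", temperature=None, max_tokens=64)
# -> {"model": "llama3.2:3b", "prompt": "Hello", "max_tokens": 64}
```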
| Name |
|---|
| anthropic |
| bedrock |
| cerebras |
| cerebras_openai_compat |
| databricks |
| fireworks |
| fireworks_openai_compat |
| gemini |
| groq |
| groq_openai_compat |
| llama_openai_compat |
| nvidia |
| ollama |
| openai |
| passthrough |
| runpod |
| sambanova |
| sambanova_openai_compat |
| tgi |
| together |
| together_openai_compat |
| vllm |
| watsonx |
| `__init__.py` |