llama-stack-mirror/llama_stack/providers/utils/inference
Ben Browning 49148bb26a fix: openai_compat messages system/assistant non-str content
When converting OpenAI message content for the "system" and
"assistant" roles to Llama Stack inference APIs (used by some
providers to get proper prompt and tool handling when serving Llama
models via OpenAI API requests), we were not properly converting
any non-string content.
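
For illustration, this is the kind of message that triggered the bug:
the OpenAI chat completions API allows `content` to be a list of
content parts rather than a plain string, and that list shape was
reaching the conversion unhandled (example shape only, not taken from
the test suite):

```
# A system message whose content is a list of content parts instead of
# a plain string -- valid under the OpenAI chat completions API, but
# previously passed through unconverted for the system/assistant roles.
system_message = {
    "role": "system",
    "content": [
        {"type": "text", "text": "You are a helpful assistant."},
        {"type": "text", "text": "Always answer concisely."},
    ],
}
```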

I discovered this while running the new Responses API verification
suite against the Fireworks provider, but instead of fixing it as part
of some ongoing work there, I split it out into a separate PR.

This PR fixes that by using the `openai_content_to_content` helper we
already use elsewhere, ensuring content parts are mapped properly.
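
As a rough sketch of the change (only `openai_content_to_content` is
the real helper; the surrounding function name and the `SystemMessage`
import path are illustrative assumptions, not the exact code in
`openai_compat.py`):

```
from llama_stack.apis.inference import SystemMessage  # assumed import path

def convert_system_message(msg: dict) -> SystemMessage:  # illustrative name
    # Before the fix: msg["content"] was used as-is, so list-of-parts
    # content leaked through to the Llama Stack inference APIs unconverted.
    # After the fix: all content is routed through the same helper used
    # for user messages, which maps OpenAI content parts to Llama Stack
    # content.
    return SystemMessage(content=openai_content_to_content(msg["content"]))
```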

I added a couple of new tests to `test_openai_compat` to reproduce
this issue and validate its fix, and ran them as follows:

```
python -m pytest -s -v tests/unit/providers/utils/inference/test_openai_compat.py
```
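
The new tests follow roughly this pattern (a sketch only; names and
assertions are illustrative, not copied from `test_openai_compat.py`):

```
def test_system_message_with_list_content_is_converted():
    # Reproduce the bug: non-string content on a system message.
    msg = {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}],
    }
    converted = convert_system_message(msg)  # illustrative converter above
    # The raw OpenAI part dicts must not pass through unconverted.
    if isinstance(converted.content, list):
        assert all(not isinstance(part, dict) for part in converted.content)
```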

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-05-02 15:31:22 -04:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `litellm_openai_mixin.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `model_registry.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `openai_compat.py` | fix: openai_compat messages system/assistant non-str content | 2025-05-02 15:31:22 -04:00 |
| `prompt_adapter.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |