llama-stack-mirror/llama_stack/providers/utils/inference
Francisco Arceo 554ada57b0
chore: Add OpenAI compatibility for Ollama embeddings (#2440)
# What does this PR do?
This PR adds OpenAI compatibility for Ollama embeddings. Closes
https://github.com/meta-llama/llama-stack/issues/2428

Summary of changes:
- `llama_stack/providers/remote/inference/ollama/ollama.py`
  - Implements the OpenAI embeddings endpoint for Ollama, replacing the
    `NotImplementedError` with a full implementation that validates the
    model, prepares parameters, calls the client, encodes embedding data
    (optionally in base64), and returns a correctly structured response
    (see the sketch below).
  - Updates import statements to include the new embedding response
    utilities.
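
A minimal sketch of the new method's shape, pieced together from the summary above; `self.openai_client` and `OpenAIEmbeddingUsage` are assumptions about the surrounding adapter code rather than verbatim PR code:

```python
# Hypothetical sketch of the Ollama adapter's openai_embeddings method.
# The helper names come from this PR; the client attribute and response
# classes are assumptions about the rest of the adapter.
async def openai_embeddings(
    self,
    model: str,
    input: str | list[str],
    encoding_format: str | None = "float",
    dimensions: int | None = None,
    user: str | None = None,
) -> OpenAIEmbeddingsResponse:
    # Validate that the requested model is registered.
    model_obj = await self._get_model(model)

    # Standardize the request parameters via the new shared utility.
    params = prepare_openai_embeddings_params(
        model=model_obj.provider_resource_id,
        input=input,
        encoding_format=encoding_format,
        dimensions=dimensions,
        user=user,
    )

    # Call Ollama through its OpenAI-compatible client.
    response = await self.openai_client.embeddings.create(**params)

    # Encode the embedding data, base64-packing it if requested.
    data = b64_encode_openai_embeddings_response(response.data, encoding_format)

    return OpenAIEmbeddingsResponse(
        data=data,
        model=model_obj.identifier,  # see the Note below
        usage=OpenAIEmbeddingUsage(
            prompt_tokens=response.usage.prompt_tokens,
            total_tokens=response.usage.total_tokens,
        ),
    )
```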

- `llama_stack/providers/utils/inference/litellm_openai_mixin.py`
  - Refactors the embedding data encoding logic to use a new shared
    utility (`b64_encode_openai_embeddings_response`) instead of inline
    base64 encoding and packing logic (see the before/after sketch below).
  - Cleans up imports accordingly.
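
Illustratively, the refactor collapses the mixin's inline packing into one call to the shared helper; the "before" shape here is an assumption about the prior inline logic, not verbatim code:

```python
import base64
import struct

# Before: inline base64 packing inside the LiteLLM mixin (assumed shape).
data = []
for i, embedding_data in enumerate(response.data):
    if encoding_format == "base64":
        # Pack floats as float32 bytes, then base64-encode the buffer.
        byte_data = b"".join(struct.pack("f", v) for v in embedding_data.embedding)
        embedding = base64.b64encode(byte_data).decode("utf-8")
    else:
        embedding = embedding_data.embedding
    data.append(OpenAIEmbeddingData(embedding=embedding, index=i))

# After: a single call to the utility added in openai_compat.py.
data = b64_encode_openai_embeddings_response(response.data, encoding_format)
```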

- `llama_stack/providers/utils/inference/openai_compat.py`
  - Adds `b64_encode_openai_embeddings_response` to handle encoding OpenAI
    embedding outputs (including base64 support) in a reusable way.
  - Adds `prepare_openai_embeddings_params` utility for standardizing
    embedding parameter preparation (both utilities are sketched below).
  - Updates imports to include the new embedding data class.
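
A sketch of what the two utilities plausibly look like, assuming `OpenAIEmbeddingData` (the embedding data class mentioned above) carries an `embedding` payload and an `index`; the exact signatures in the PR may differ:

```python
import base64
import struct


def b64_encode_openai_embeddings_response(
    response_data: list,
    encoding_format: str | None = "float",
) -> list[OpenAIEmbeddingData]:
    """Pack each embedding as float32 bytes and base64-encode it when the
    caller asked for "base64"; otherwise pass the float list through."""
    data = []
    for i, embedding_data in enumerate(response_data):
        if encoding_format == "base64":
            byte_array = bytearray()
            for value in embedding_data.embedding:
                byte_array.extend(struct.pack("f", value))
            embedding = base64.b64encode(byte_array).decode("utf-8")
        else:
            embedding = embedding_data.embedding
        data.append(OpenAIEmbeddingData(embedding=embedding, index=i))
    return data


def prepare_openai_embeddings_params(
    model: str,
    input: str | list[str],
    encoding_format: str | None = "float",
    dimensions: int | None = None,
    user: str | None = None,
) -> dict:
    """Build the kwargs for an OpenAI-style embeddings call, dropping
    options the caller did not supply."""
    if model is None:
        raise ValueError("Model must be provided for embeddings")
    params: dict = {
        "model": model,
        "input": [input] if isinstance(input, str) else input,
    }
    if encoding_format is not None:
        params["encoding_format"] = encoding_format
    if dimensions is not None:
        params["dimensions"] = dimensions
    if user is not None:
        params["user"] = user
    return params
```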

- `tests/integration/inference/test_openai_embeddings.py`
  - Removes `"remote::ollama"` from the list of providers that skip OpenAI
    embeddings tests, since support is now implemented (see below).
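
The test-side change is a one-line removal from the skip list. The helper below is a hypothetical reconstruction of the pattern such tests commonly use; only the removal of `"remote::ollama"` is taken from this PR:

```python
import pytest

# Hypothetical skip list of provider types lacking OpenAI embeddings
# support. Entries other than the removed one are illustrative.
PROVIDERS_WITHOUT_OPENAI_EMBEDDINGS = {
    "inference::meta-reference",
    # "remote::ollama",  # removed by this PR: now supported
}


def skip_if_unsupported(provider_type: str) -> None:
    """Skip an OpenAI embeddings test for providers that lack support."""
    if provider_type in PROVIDERS_WITHOUT_OPENAI_EMBEDDINGS:
        pytest.skip(f"{provider_type} does not support OpenAI embeddings")
```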

## Note

There was one minor issue: I had to override the
`OpenAIEmbeddingsResponse.model` value with
`self._get_model(model).identifier`, which is very unsatisfying.
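
Concretely, the workaround amounts to something like this (a sketch; the async lookup mirrors the method sketched earlier):

```python
# Ollama reports its own model name, so the response's `model` field is
# overwritten with the identifier the model was registered under.
model_obj = await self._get_model(model)
response.model = model_obj.identifier
```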

## Test Plan
Unit tests and integration tests.

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-06-13 14:28:51 -04:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| `inference_store.py` | feat: implement get chat completions APIs (#2200) | 2025-05-21 22:21:52 -07:00 |
| `litellm_openai_mixin.py` | chore: Add OpenAI compatibility for Ollama embeddings (#2440) | 2025-06-13 14:28:51 -04:00 |
| `model_registry.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `openai_compat.py` | chore: Add OpenAI compatibility for Ollama embeddings (#2440) | 2025-06-13 14:28:51 -04:00 |
| `prompt_adapter.py` | chore: more mypy fixes (#2029) | 2025-05-06 09:52:31 -07:00 |
| `stream_utils.py` | feat: implement get chat completions APIs (#2200) | 2025-05-21 22:21:52 -07:00 |