llama-stack-mirror/llama_stack/providers/remote/inference
Francisco Arceo 554ada57b0
chore: Add OpenAI compatibility for Ollama embeddings (#2440)
# What does this PR do?
This PR adds OpenAI compatibility for Ollama embeddings. Closes
https://github.com/meta-llama/llama-stack/issues/2428

Summary of changes:
- `llama_stack/providers/remote/inference/ollama/ollama.py`
  - Implements the OpenAI embeddings endpoint for Ollama, replacing the `NotImplementedError` with a full implementation that validates the model, prepares parameters, calls the client, encodes the embedding data (optionally as base64), and returns a correctly structured response (sketched below).
  - Updates import statements to include the new embedding response utilities.
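
For orientation, here is roughly the shape of the new endpoint. The helper names come from this PR's description; the surrounding class, exact signatures, and response field types (e.g. `OpenAIEmbeddingUsage`) are assumptions, not the merged code:

```python
# Sketch of the new flow in the Ollama adapter (simplified; response types
# such as OpenAIEmbeddingsResponse come from llama_stack's inference API).
async def openai_embeddings(
    self,
    model: str,
    input: str | list[str],
    encoding_format: str | None = "float",
    dimensions: int | None = None,
    user: str | None = None,
) -> OpenAIEmbeddingsResponse:
    # Validate: resolve the registered model (raises if unknown).
    model_obj = await self._get_model(model)

    # Prepare parameters via the shared utility added in openai_compat.py.
    params = prepare_openai_embeddings_params(
        model=model_obj.provider_resource_id,
        input=input,
        encoding_format=encoding_format,
        dimensions=dimensions,
        user=user,
    )

    # Call Ollama through its OpenAI-compatible client.
    response = await self.openai_client.embeddings.create(**params)

    # Encode the embedding data (optionally as base64) with the shared helper.
    data = b64_encode_openai_embeddings_response(response.data, encoding_format)

    # Return a correctly structured response; the model name override is
    # discussed in the Note below.
    return OpenAIEmbeddingsResponse(
        data=data,
        model=model_obj.identifier,
        usage=OpenAIEmbeddingUsage(
            prompt_tokens=response.usage.prompt_tokens,
            total_tokens=response.usage.total_tokens,
        ),
    )
```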

- `llama_stack/providers/utils/inference/litellm_openai_mixin.py`
  - Refactors the embedding data encoding logic to use a new shared utility (`b64_encode_openai_embeddings_response`) instead of inline base64 encoding and packing logic (see the snippet below).
  - Cleans up imports accordingly.
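
In the mixin, the change amounts to replacing the inline packing loop with a call to the shared helper (a sketch; the surrounding variable names are assumptions):

```python
# Before: each embedding was struct-packed and base64-encoded inline here.
# After: delegate to the shared utility from openai_compat.py.
data = b64_encode_openai_embeddings_response(response.data, encoding_format)
```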

- `llama_stack/providers/utils/inference/openai_compat.py`
  - Adds `b64_encode_openai_embeddings_response` to handle encoding OpenAI embedding outputs (including base64 support) in a reusable way.
  - Adds `prepare_openai_embeddings_params`, a utility for standardizing embedding parameter preparation. Both are sketched below.
  - Updates imports to include the new embedding data class.
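
Plausible shapes for the two utilities, assuming OpenAI's convention of packing each vector as little-endian float32 before base64 encoding (the merged code returns the new embedding data class rather than plain dicts):

```python
import base64
import struct


def b64_encode_openai_embeddings_response(data, encoding_format: str | None = "float"):
    """Re-encode embedding items, base64-packing each vector as little-endian
    float32 when encoding_format == "base64"; pass floats through otherwise."""
    out = []
    for i, item in enumerate(data):
        embedding = item.embedding
        if encoding_format == "base64":
            packed = struct.pack(f"<{len(embedding)}f", *embedding)
            embedding = base64.b64encode(packed).decode("utf-8")
        out.append({"object": "embedding", "index": i, "embedding": embedding})
    return out


def prepare_openai_embeddings_params(
    model: str,
    input: str | list[str],
    encoding_format: str | None = None,
    dimensions: int | None = None,
    user: str | None = None,
) -> dict:
    """Build the kwargs for client.embeddings.create, omitting unset options."""
    params = {"model": model, "input": input}
    if encoding_format is not None:
        params["encoding_format"] = encoding_format
    if dimensions is not None:
        params["dimensions"] = dimensions
    if user is not None:
        params["user"] = user
    return params
```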

- `tests/integration/inference/test_openai_embeddings.py`
  - Removes `"remote::ollama"` from the list of providers that skip the OpenAI embeddings tests, since support is now implemented (illustrated below).
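
Illustratively, the kind of gate involved (the helper's actual name and the remaining provider set in the test file are assumptions):

```python
import pytest

# Hypothetical shape of the test gate in test_openai_embeddings.py.
PROVIDERS_WITHOUT_OPENAI_EMBEDDINGS: set[str] = {
    # "remote::ollama",  # removed by this PR: Ollama now supports the endpoint
    # ... providers that still skip ...
}


def skip_if_unsupported(provider_type: str) -> None:
    if provider_type in PROVIDERS_WITHOUT_OPENAI_EMBEDDINGS:
        pytest.skip(f"{provider_type} does not support OpenAI embeddings")
```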

## Note

One minor issue remained: I had to override the `OpenAIEmbeddingsResponse.model` name with `self._get_model(model).identifier`, which is very unsatisfying.
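
Concretely, the workaround is a one-line override (a sketch; presumably needed because the backend reports its own model name rather than the registered identifier):

```python
# Replace the provider-reported model name with the registry identifier.
response.model = self._get_model(model).identifier
```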

## Test Plan
Unit tests and integration tests. An example request against the new endpoint is shown below.
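
For example, a quick smoke test through Llama Stack's OpenAI-compatible API (the base URL, API key, and model id are assumptions for a local setup with an Ollama embedding model registered):

```python
from openai import OpenAI

# Assumed local endpoint; adjust to your Llama Stack deployment.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

resp = client.embeddings.create(
    model="all-minilm:latest",   # assumed registered Ollama embedding model
    input=["Hello, world!"],
    encoding_format="base64",    # exercises the new base64 path
)
print(resp.model, len(resp.data))
```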

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-06-13 14:28:51 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| anthropic | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| bedrock | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| cerebras | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| cerebras_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| databricks | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| fireworks | fix: fireworks provider for openai compat inference endpoint (#2335) | 2025-06-02 14:11:15 -07:00 |
| fireworks_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| gemini | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| groq | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| groq_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| llama_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| nvidia | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| ollama | chore: Add OpenAI compatibility for Ollama embeddings (#2440) | 2025-06-13 14:28:51 -04:00 |
| openai | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| passthrough | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| runpod | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| sambanova | fix(providers): update sambanova json schema mode (#2306) | 2025-05-29 09:54:23 -07:00 |
| sambanova_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| tgi | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| together | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| together_openai_compat | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| vllm | feat: To add health status check for remote VLLM (#2303) | 2025-06-06 15:33:12 -04:00 |
| watsonx | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |