litellm-mirror/litellm/llms/ollama/common_utils.py
Ishaan Jaff 7a5dd29fe0
(fix) unable to pass input_type parameter to Voyage AI embedding models (#7276)
* VoyageEmbeddingConfig

* fix voyage logic to get params

* add voyage embedding transformation

* add get_provider_embedding_config

* use BaseEmbeddingConfig

* voyage clean up

* use llm http handler for embedding transformations

* test_voyage_ai_embedding_extra_params

* add voyage async

* test_voyage_ai_embedding_extra_params

* add async for llm http handler

* update BaseLLMEmbeddingTest

* test_voyage_ai_embedding_extra_params

* fix linting

* fix get_provider_embedding_config

* fix anthropic text test

* update location of base/chat/transformation

* fix import path

* fix IBMWatsonXAIConfig
2024-12-17 19:23:49 -08:00
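
A minimal sketch of what the fix above enables, assuming the standard litellm.embedding call shape; the model name below is illustrative, not taken from the PR:

    import litellm

    response = litellm.embedding(
        model="voyage/voyage-3",  # illustrative model name
        input=["hello world"],
        input_type="query",  # provider-specific param, now forwarded to Voyage AI
    )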


from typing import Union

import httpx

from litellm.llms.base_llm.chat.transformation import BaseLLMException


class OllamaError(BaseLLMException):
    def __init__(
        self, status_code: int, message: str, headers: Union[dict, httpx.Headers]
    ):
        super().__init__(status_code=status_code, message=message, headers=headers)
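

# Illustrative usage sketch (not part of the original module): wrapping an
# httpx error response from an Ollama server. The request below is an
# assumption for illustration only.
#
#     resp = httpx.get("http://localhost:11434/api/tags")
#     if resp.status_code >= 400:
#         raise OllamaError(
#             status_code=resp.status_code,
#             message=resp.text,
#             headers=resp.headers,
#         )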
def _convert_image(image):
    """
    Convert an image to a base64-encoded JPEG if it is not already base64 JPEG/PNG.

    If the image is already base64-encoded AND is a JPEG/PNG, return it as-is.
    If the image is in any other format, convert it to base64-encoded JPEG.
    """
    import base64
    import io

    try:
        from PIL import Image
    except Exception:
        raise Exception(
            "ollama image conversion failed. Please run `pip install Pillow`"
        )

    orig = image
    if image.startswith("data:"):
        # Strip a data URI prefix (e.g. "data:image/png;base64,"), keeping
        # only the base64 payload.
        image = image.split(",")[-1]
    try:
        image_data = Image.open(io.BytesIO(base64.b64decode(image)))
        if image_data.format in ["JPEG", "PNG"]:
            return image
    except Exception:
        # Not valid base64 image data; return the input unchanged.
        return orig
    # Re-encode any other format (e.g. GIF, WEBP) as base64 JPEG.
    jpeg_image = io.BytesIO()
    image_data.convert("RGB").save(jpeg_image, "JPEG")
    jpeg_image.seek(0)
    return base64.b64encode(jpeg_image.getvalue()).decode("utf-8")
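

if __name__ == "__main__":
    # Illustrative sketch (not part of the original module): round-trip a tiny
    # generated PNG through _convert_image. Assumes Pillow is installed.
    import base64
    import io

    from PIL import Image

    buf = io.BytesIO()
    Image.new("RGB", (2, 2), color=(255, 0, 0)).save(buf, "PNG")
    png_b64 = base64.b64encode(buf.getvalue()).decode("utf-8")

    # An already-base64 PNG is returned unchanged.
    assert _convert_image(png_b64) == png_b64

    # A data: URI is stripped down to its base64 payload.
    assert _convert_image("data:image/png;base64," + png_b64) == png_b64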