LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158)

* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic

* fix(vertex_ai/): support passing custom api base to partner models

Fixes https://github.com/BerriAI/litellm/issues/4317
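
For context, a minimal sketch of what this enables from the Python SDK, assuming litellm.completion with a Vertex AI Anthropic partner model; the project, region, and endpoint values below are placeholders, not values from this PR:

    import litellm

    # Hedged sketch: route a Vertex AI partner model (Anthropic) through a
    # custom endpoint. All identifiers below are placeholders.
    response = litellm.completion(
        model="vertex_ai/claude-3-5-sonnet@20240620",
        messages=[{"role": "user", "content": "Hello from a partner model"}],
        vertex_project="my-gcp-project",                  # placeholder project id
        vertex_location="us-east5",                       # placeholder region
        api_base="https://my-vertex-proxy.example.com",   # custom api base, per this fix
    )
    print(response.choices[0].message.content)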

* fix(proxy_server.py): Fix prometheus premium user check logic

* docs(prometheus.md): update quick start docs

* fix(custom_llm.py): support passing dynamic api key + api base
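
A hedged sketch of the handler side, assuming a CustomLLM subclass registered through litellm.custom_provider_map; it mirrors the test added in the diff below, with the remaining parameters of the real interface elided behind **kwargs:

    from typing import Any, Optional

    import litellm
    from litellm import CustomLLM
    from litellm.types.utils import ImageResponse


    class MyCustomLLM(CustomLLM):
        async def aimage_generation(
            self,
            model: str,
            prompt: str,
            api_key: Optional[str],    # now forwarded from the caller
            api_base: Optional[str],   # now forwarded from the caller
            model_response: ImageResponse,
            optional_params: dict,
            logging_obj: Any,
            **kwargs,
        ) -> ImageResponse:
            # A real handler would call its backend here using api_key/api_base.
            return model_response


    litellm.custom_provider_map = [
        {"provider": "custom_llm", "custom_handler": MyCustomLLM()}
    ]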

* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints

Closes https://github.com/BerriAI/litellm/issues/6081

* feat(openai/realtime): add openai realtime api logging

Closes https://github.com/BerriAI/litellm/issues/6081

* fix(realtime_streaming.py): fix linting errors

* fix(realtime_streaming.py): fix linting errors

* fix: fix linting errors

* fix pattern match router

* Add literalai in the sidebar observability category (#6163)

* fix: add literalai in the sidebar

* fix: typo

* update (#6160)

* Feat: Add Langtrace integration (#5341)

* Feat: Add Langtrace integration

* add langtrace service name

* fix timestamps for traces

* add tests

* Discard Callback + use existing otel logger

* cleanup

* remove print statements

* remove callback

* add docs

* docs

* add logging docs

* format logging

* remove emoji and add litellm proxy example

* format logging

* format `logging.md`

* add langtrace docs to logging.md

* sync conflict

* docs fix
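
A hedged sketch of enabling the integration from the Python SDK; the callback name "langtrace" and the LANGTRACE_API_KEY variable follow the docs added above, but treat them as assumptions here and defer to logging.md for the authoritative setup:

    import os

    import litellm

    # Assumed setup: register the Langtrace-backed OTEL logger by name and
    # provide the API key via the environment (placeholder value below).
    os.environ["LANGTRACE_API_KEY"] = "ltr-..."
    litellm.callbacks = ["langtrace"]

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Trace this call"}],
    )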

* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165)

* fix move s3 to use customLogger

* add basic s3 logging test

* add s3 to custom logger compatible

* use batch logger for s3

* s3 set flush interval and batch size

* fix s3 logging

* add notes on s3 logging

* fix s3 logging

* add basic s3 logging test

* fix s3 type errors

* add test for sync logging on s3

* fix: fix to debug log
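
A hedged sketch of the S3 setup from the Python SDK, assuming the existing "s3" success callback and litellm.s3_callback_params; the flush/batch keys are guesses inferred from the commit titles above, not confirmed parameter names:

    import litellm

    # Enable the S3 success callback (existing callback name).
    litellm.success_callback = ["s3"]

    # Bucket settings are placeholders; the last two keys are assumptions
    # inferred from "s3 set flush interval and batch size" above.
    litellm.s3_callback_params = {
        "s3_bucket_name": "my-litellm-logs",
        "s3_region_name": "us-west-2",
        "s3_flush_interval": 10,   # assumed: seconds between batch flushes
        "s3_batch_size": 512,      # assumed: max log events per batch
    }

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "hi"}],
    )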

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
Authored by Krish Dholakia on 2024-10-11 23:04:36 -07:00, committed by GitHub
parent 9db4ccca9f
commit 11f9df923a
28 changed files with 966 additions and 760 deletions


@@ -28,7 +28,6 @@ from typing import (
     Union,
 )
 from unittest.mock import AsyncMock, MagicMock, patch
-import httpx
 from dotenv import load_dotenv
@@ -226,6 +225,8 @@ class MyCustomLLM(CustomLLM):
         self,
         model: str,
         prompt: str,
+        api_key: Optional[str],
+        api_base: Optional[str],
         model_response: ImageResponse,
         optional_params: dict,
         logging_obj: Any,
@@ -242,6 +243,8 @@ class MyCustomLLM(CustomLLM):
         self,
         model: str,
         prompt: str,
+        api_key: Optional[str],
+        api_base: Optional[str],
         model_response: ImageResponse,
         optional_params: dict,
         logging_obj: Any,
@@ -362,3 +365,31 @@ async def test_simple_image_generation_async():
     )
     print(resp)
+
+
+@pytest.mark.asyncio
+async def test_image_generation_async_with_api_key_and_api_base():
+    my_custom_llm = MyCustomLLM()
+    litellm.custom_provider_map = [
+        {"provider": "custom_llm", "custom_handler": my_custom_llm}
+    ]
+
+    with patch.object(
+        my_custom_llm, "aimage_generation", new=AsyncMock()
+    ) as mock_client:
+        try:
+            resp = await litellm.aimage_generation(
+                model="custom_llm/my-fake-model",
+                prompt="Hello world",
+                api_key="my-api-key",
+                api_base="my-api-base",
+            )
+            print(resp)
+        except Exception as e:
+            print(e)
+
+        mock_client.assert_awaited_once()
+        assert mock_client.call_args.kwargs["api_key"] == "my-api-key"
+        assert mock_client.call_args.kwargs["api_base"] == "my-api-base"