litellm/tests/logging_callback_tests/test_unit_tests_init_callbacks.py
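
"""
Unit tests for initializing custom-logger-compatible callbacks by string name.

For every entry in litellm._known_custom_logger_compatible_callbacks (e.g. "langsmith",
"mlflow", "prometheus"), the test sets the name on litellm.callbacks or
litellm.success_callback, runs a few mocked completions, and asserts that exactly one
instance of the expected logger class gets registered.
"""
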
import json
import os
import sys
from datetime import datetime
from unittest.mock import AsyncMock
from pydantic.main import Model
sys.path.insert(
    0, os.path.abspath("../..")
)  # Adds the parent directory to the system-path
from typing import Literal
import pytest
import litellm
import asyncio
import logging
from litellm._logging import verbose_logger
from prometheus_client import REGISTRY, CollectorRegistry
from litellm.integrations.lago import LagoLogger
from litellm.integrations.openmeter import OpenMeterLogger
from litellm.integrations.braintrust_logging import BraintrustLogger
from litellm.integrations.galileo import GalileoObserve
from litellm.integrations.langsmith import LangsmithLogger
from litellm.integrations.literal_ai import LiteralAILogger
from litellm.integrations.prometheus import PrometheusLogger
from litellm.integrations.datadog.datadog import DataDogLogger
from litellm.integrations.datadog.datadog_llm_obs import DataDogLLMObsLogger
from litellm.integrations.gcs_bucket.gcs_bucket import GCSBucketLogger
from litellm.integrations.opik.opik import OpikLogger
from litellm.integrations.opentelemetry import OpenTelemetry
from litellm.integrations.mlflow import MlflowLogger
from litellm.integrations.argilla import ArgillaLogger
from litellm.proxy.hooks.dynamic_rate_limiter import _PROXY_DynamicRateLimitHandler
from unittest.mock import patch
# clear prometheus collectors / registry
collectors = list(REGISTRY._collector_to_names.keys())
for collector in collectors:
    REGISTRY.unregister(collector)
######################################
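# Maps each supported string callback name (as used in litellm.callbacks /
# litellm.success_callback) to the logger class litellm is expected to instantiate for it.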
callback_class_str_to_classType = {
    "lago": LagoLogger,
    "openmeter": OpenMeterLogger,
    "braintrust": BraintrustLogger,
    "galileo": GalileoObserve,
    "langsmith": LangsmithLogger,
    "literalai": LiteralAILogger,
    "prometheus": PrometheusLogger,
    "datadog": DataDogLogger,
    "datadog_llm_observability": DataDogLLMObsLogger,
    "gcs_bucket": GCSBucketLogger,
    "opik": OpikLogger,
    "argilla": ArgillaLogger,
    "opentelemetry": OpenTelemetry,
    # OTEL compatible loggers
    "logfire": OpenTelemetry,
    "arize": OpenTelemetry,
    "langtrace": OpenTelemetry,
    "mlflow": MlflowLogger,
}
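
# Dummy values for each logger's required environment variables, so the loggers can
# initialize without real credentials.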
expected_env_vars = {
    "LAGO_API_KEY": "api_key",
    "LAGO_API_BASE": "mock_base",
    "LAGO_API_EVENT_CODE": "mock_event_code",
    "OPENMETER_API_KEY": "openmeter_api_key",
    "BRAINTRUST_API_KEY": "braintrust_api_key",
    "GALILEO_API_KEY": "galileo_api_key",
    "LITERAL_API_KEY": "literal_api_key",
    "DD_API_KEY": "datadog_api_key",
    "DD_SITE": "datadog_site",
    "GOOGLE_APPLICATION_CREDENTIALS": "gcs_credentials",
    "OPIK_API_KEY": "opik_api_key",
    "LANGTRACE_API_KEY": "langtrace_api_key",
    "LOGFIRE_TOKEN": "logfire_token",
    "ARIZE_SPACE_KEY": "arize_space_key",
    "ARIZE_API_KEY": "arize_api_key",
    "ARGILLA_API_KEY": "argilla_api_key",
}
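

# Reset every litellm callback list so each callback under test starts from a clean slate.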
def reset_all_callbacks():
    litellm.callbacks = []
    litellm.input_callback = []
    litellm.success_callback = []
    litellm.failure_callback = []
    litellm._async_success_callback = []
    litellm._async_failure_callback = []


initial_env_vars = {}


def init_env_vars():
    for env_var, value in expected_env_vars.items():
        if env_var not in os.environ:
            os.environ[env_var] = value
        else:
            initial_env_vars[env_var] = os.environ[env_var]


def reset_env_vars():
    for env_var, value in initial_env_vars.items():
        os.environ[env_var] = value


all_callback_required_env_vars = []
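
# Configure the given callback (via litellm.callbacks or litellm.success_callback), run a few
# mocked completions, and assert that exactly one instance of the expected logger class was
# registered on litellm's callback lists.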
async def use_callback_in_llm_call(
    callback: str, used_in: Literal["callbacks", "success_callback"]
):
    if callback == "dynamic_rate_limiter":
        # internal CustomLogger class that expects an internal_usage_cache to be passed in; it always fails when tested this way
        return
    elif callback == "argilla":
        litellm.argilla_transformation_object = {}
    elif callback == "openmeter":
        # currently handled in a jank way; TODO: fix openmeter and then actually run its test
        return
    elif callback == "prometheus":
        # pytest teardown - clear existing prometheus collectors
        collectors = list(REGISTRY._collector_to_names.keys())
        for collector in collectors:
            REGISTRY.unregister(collector)

    # Mock the httpx call for Argilla dataset retrieval
    if callback == "argilla":
        import httpx

        mock_response = httpx.Response(
            status_code=200, json={"items": [{"id": "mocked_dataset_id"}]}
        )
        patch.object(
            litellm.module_level_client, "get", return_value=mock_response
        ).start()

    if used_in == "callbacks":
        litellm.callbacks = [callback]
    elif used_in == "success_callback":
        litellm.success_callback = [callback]

    for _ in range(5):
        await litellm.acompletion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hi"}],
            temperature=0.1,
            mock_response="hello",
        )

    await asyncio.sleep(0.5)

    expected_class = callback_class_str_to_classType[callback]

    if used_in == "callbacks":
        assert isinstance(litellm._async_success_callback[0], expected_class)
        assert isinstance(litellm._async_failure_callback[0], expected_class)
        assert isinstance(litellm.success_callback[0], expected_class)
        assert isinstance(litellm.failure_callback[0], expected_class)

        assert len(litellm._async_success_callback) == 1
        assert len(litellm._async_failure_callback) == 1
        assert len(litellm.success_callback) == 1
        assert len(litellm.failure_callback) == 1
        assert len(litellm.callbacks) == 1
    elif used_in == "success_callback":
        print(f"litellm.success_callback: {litellm.success_callback}")
        print(f"litellm._async_success_callback: {litellm._async_success_callback}")
        assert isinstance(litellm.success_callback[1], expected_class)
        assert len(litellm.success_callback) == 2  # ["lago", LagoLogger]
        assert isinstance(litellm._async_success_callback[0], expected_class)
        assert len(litellm._async_success_callback) == 1

        # TODO: also assert that it's not set for failure_callback
        # As of Oct 21 2024, it's currently set
        # First add test coverage for setting it only via success_callback/_async_success_callback

    if callback == "argilla":
        patch.stopall()


@pytest.mark.asyncio
async def test_init_custom_logger_compatible_class_as_callback():
    init_env_vars()

    # used like litellm.callbacks = ["prometheus"]
    for callback in litellm._known_custom_logger_compatible_callbacks:
        print(f"Testing callback: {callback}")
        reset_all_callbacks()

        await use_callback_in_llm_call(callback, used_in="callbacks")

    # used like litellm.success_callback = ["prometheus"]
    for callback in litellm._known_custom_logger_compatible_callbacks:
        print(f"Testing callback: {callback}")
        reset_all_callbacks()

        await use_callback_in_llm_call(callback, used_in="success_callback")

    reset_env_vars()