Litellm dev 12 24 2024 p2 (#7400)

* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'

Closes https://github.com/BerriAI/litellm/issues/7397
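
A minimal sketch of what this default enables (illustrative; the model name is arbitrary): supports_response_schema can now be called without an explicit custom_llm_provider, presumably inferring the provider from the model name.

    import litellm

    # Before this fix, omitting custom_llm_provider could fail; now it
    # defaults to None and the provider is resolved from the model.
    print(litellm.supports_response_schema(model="gpt-4o-mini"))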

* refactor(langfuse/): call the Langfuse logger inside a CustomLogger-compatible Langfuse class; refactor the Langfuse logger to use verbose_logger.debug instead of print_verbose

* refactor(litellm_pre_call_utils.py): move config based team callbacks inside dynamic team callback logic

enables simpler unit testing for config-based team callbacks

* fix(proxy/_types.py): handle TeamCallbackMetadata None values

drop None values if present; if all values are None, fall back to the default dict to avoid downstream errors
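
A minimal sketch of the None-handling described above (the helper name is illustrative; field names follow TeamCallbackMetadata's success_callback / failure_callback / callback_vars shape):

    from typing import Any, Dict

    DEFAULTS: Dict[str, Any] = {
        "success_callback": [],
        "failure_callback": [],
        "callback_vars": {},
    }

    def drop_none_values(values: Dict[str, Any]) -> Dict[str, Any]:
        # Keep only keys whose values are actually set ...
        cleaned = {k: v for k, v in values.items() if v is not None}
        # ... and if everything was None, return safe defaults so downstream
        # code never iterates over a None callback list.
        return cleaned if cleaned else dict(DEFAULTS)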

* test(test_proxy_utils.py): add unit test preventing future regressions - asserts team_id in config state is not popped off across calls

Fixes https://github.com/BerriAI/litellm/issues/6787
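
A hedged sketch of the regression this test guards against (resolve_team_callbacks is a hypothetical stand-in for the real helper in litellm_pre_call_utils.py, not the PR's code):

    team_config = {"team_id": "team-1", "success_callback": ["langfuse"]}

    def resolve_team_callbacks(config):
        # A buggy implementation might do config.pop("team_id") here,
        # silently mutating shared proxy config state between calls.
        return {k: v for k, v in config.items() if k != "team_id"}

    def test_team_id_survives_repeated_calls():
        resolve_team_callbacks(team_config)
        resolve_team_callbacks(team_config)
        assert "team_id" in team_config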

* fix(langfuse_prompt_management.py): add success + failure logging event support
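
The hook shape involved (CustomLogger's async success/failure events are litellm's public logging interface; the class body here is a sketch, not the PR's implementation):

    from litellm.integrations.custom_logger import CustomLogger

    class LangfusePromptManagementSketch(CustomLogger):
        async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
            ...  # forward the success event to the Langfuse logger

        async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
            ...  # failure events are now forwarded too, per this fix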

* fix: fix linting error

* test: fix test

* test: fix test

* test: override o1 prompt caching - OpenAI currently not working

* test: fix test

Author: Krish Dholakia
Date: 2024-12-24 20:33:41 -08:00
Committed by: GitHub
Parent: d790ba0897
Commit: c95351e70f

12 changed files with 227 additions and 62 deletions

Diff excerpt (langfuse pass-through route):

@@ -53,6 +53,8 @@ async def langfuse_proxy_route(
     [Docs](https://docs.litellm.ai/docs/pass_through/langfuse)
     """
+    from litellm.proxy.proxy_server import proxy_config
+
     ## CHECK FOR LITELLM API KEY IN THE QUERY PARAMS - ?..key=LITELLM_API_KEY
     api_key = request.headers.get("Authorization") or ""
@@ -68,7 +70,9 @@ async def langfuse_proxy_route(
         )
     callback_settings_obj: Optional[TeamCallbackMetadata] = (
-        _get_dynamic_logging_metadata(user_api_key_dict=user_api_key_dict)
+        _get_dynamic_logging_metadata(
+            user_api_key_dict=user_api_key_dict, proxy_config=proxy_config
+        )
     )
     dynamic_langfuse_public_key: Optional[str] = None
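
The diff threads proxy_config into _get_dynamic_logging_metadata, so config-based team callbacks are resolved inside the dynamic team callback path, matching the refactor described above.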