Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26 03:04:13 +00:00
* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'. Closes https://github.com/BerriAI/litellm/issues/7397
* refactor(langfuse/): call the Langfuse logger inside a CustomLogger-compatible Langfuse class; refactor the Langfuse logger to use verbose_logger.debug instead of print_verbose
* refactor(litellm_pre_call_utils.py): move config-based team callbacks inside the dynamic team callback logic, enabling simpler unit testing of config-based team callbacks
* fix(proxy/_types.py): handle TeamCallbackMetadata None values: drop None values if present; if all values are None, use a default dict to avoid downstream errors
* test(test_proxy_utils.py): add a unit test preventing future regressions; asserts team_id in config state is not popped off across calls. Fixes https://github.com/BerriAI/litellm/issues/6787
* fix(langfuse_prompt_management.py): add success + failure logging event support
* fix: fix linting error
* test: fix test
* test: fix test
* test: override o1 prompt caching; OpenAI prompt caching currently not working
* test: fix test
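The proxy/_types.py fix above (drop None values from the team callback metadata, and fall back to a default dict when nothing remains) can be sketched as follows. The helper name `drop_none_values` and the argument shapes are illustrative assumptions, not litellm's actual implementation.

```python
from typing import Dict, List, Optional

def drop_none_values(
    callback_metadata: Dict[str, Optional[List[str]]],
    default: Optional[Dict[str, List[str]]] = None,
) -> Dict[str, List[str]]:
    """Drop keys whose value is None; if every value was None,
    return the default dict so downstream code never sees None."""
    # keep only entries with a real value
    cleaned = {k: v for k, v in callback_metadata.items() if v is not None}
    if not cleaned:
        return default if default is not None else {}
    return cleaned

# Usage: a partially-populated metadata dict keeps its real entries,
# while an all-None dict collapses to the safe default.
partial = drop_none_values({"success_callback": ["langfuse"], "failure_callback": None})
empty = drop_none_values({"success_callback": None}, default={"success_callback": []})
```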
adapters
assistants
batch_completion
batches
caching
deprecated_litellm_server
files
fine_tuning
integrations
litellm_core_utils
llms
proxy
realtime_api
rerank_api
router_strategy
router_utils
secret_managers
types
__init__.py
_logging.py
_redis.py
_service_logger.py
_version.py
budget_manager.py
constants.py
cost.json
cost_calculator.py
exceptions.py
main.py
model_prices_and_context_window_backup.json
py.typed
router.py
scheduler.py
timeout.py
utils.py