* build(pyproject.toml): add new dev dependencies for type checking
* build: reformat files to fit black
* ci: reformat to fit black
* ci(test-litellm.yml): make test runs clearer
* build(pyproject.toml): add ruff
* fix: fix ruff checks
* build(mypy/): fix mypy linting errors
* fix(hashicorp_secret_manager.py): fix passing cert for tls auth
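A minimal sketch of the idea behind this fix, assuming an httpx-based client: the certificate is passed as a `(cert, key)` tuple so it is presented during the TLS handshake rather than sent as a header. `cert_path`/`key_path` are illustrative names, not litellm's actual config keys.

```python
import httpx

def make_vault_client(cert_path: str, key_path: str) -> httpx.Client:
    # httpx presents the client certificate during the TLS handshake
    # when `cert` is given as a (certificate, private key) tuple.
    return httpx.Client(cert=(cert_path, key_path))
```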
* build(mypy/): resolve all mypy errors
* test: update test
* fix: fix black formatting
* build(pre-commit-config.yaml): use poetry run black
* fix(proxy_server.py): fix linting error
* fix: fix ruff safe representation error
* refactor _get_langfuse_input_output_content
* test: add test_langfuse_logging_completion_with_malformed_llm_response
* fix _get_langfuse_input_output_content
* fix: fix langfuse linting errors
* test: add unit tests for langfuse chat/text content extraction
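A hedged sketch of what `_get_langfuse_input_output_content` guards against (not the actual helper): a malformed LLM response should yield `None` instead of raising inside the logging callback.

```python
from typing import Any, Optional

def get_chat_content(response: Any) -> Optional[str]:
    """Best-effort extraction of chat content from a (possibly malformed) response."""
    try:
        choices = response.get("choices") or []
        message = choices[0].get("message") or {}
        return message.get("content")
    except (AttributeError, IndexError, TypeError):
        # Malformed response: return nothing rather than break langfuse logging.
        return None
```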
* fix _should_raise_content_policy_error
* fix(base_utils.py): support nested json schema passed in for anthropic calls
* refactor(base_utils.py): refactor ref parsing to prevent infinite loop
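An illustrative, cycle-safe version of the `$ref` expansion described above; the real code in `base_utils.py` may differ. The key point is tracking already-seen refs so self-referential schemas terminate.

```python
from typing import Any, Dict, Optional, Set

def resolve_refs(
    schema: Dict[str, Any],
    defs: Dict[str, Any],
    seen: Optional[Set[str]] = None,
) -> Dict[str, Any]:
    seen = seen if seen is not None else set()
    if "$ref" in schema:
        ref_name = schema["$ref"].split("/")[-1]
        if ref_name in seen:
            return {}  # cycle detected: stop instead of recursing forever
        seen.add(ref_name)
        return resolve_refs(defs.get(ref_name, {}), defs, seen)
    return {
        key: resolve_refs(value, defs, seen) if isinstance(value, dict) else value
        for key, value in schema.items()
    }
```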
* test(test_openai_endpoints.py): refactor anthropic test to use bedrock
* fix(langfuse_prompt_management.py): add unit test for sync langfuse calls
Resolves https://github.com/BerriAI/litellm/issues/7938#issuecomment-2613293757
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
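A hypothetical helper illustrating the fix (the actual logic lives in `transformation.py`): `http://` URLs must be treated the same as `https://` ones instead of falling through as unsupported.

```python
def classify_gemini_file_url(url: str) -> str:
    if url.startswith("gs://"):
        # Google Cloud Storage URIs can be passed to gemini as file_uri.
        return "pass_through"
    if url.startswith(("http://", "https://")):
        # Both schemes are remote files to fetch; the bug was matching
        # only 'https://' and rejecting plain 'http://'.
        return "fetch_and_inline"
    raise ValueError(f"unsupported url scheme: {url}")
```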
* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic
deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): mark prompt management as beta
given feedback, this still needs to be revised (e.g. passing in the user message rather than ignoring it)
* refactor(prompt_management_base.py): introduce base class for prompt management
allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
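A sketch of the base class's shape, with illustrative names (the real class is in `prompt_management_base.py`): integrations implement template fetching, while message compilation, including appending the client's messages, is shared.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class PromptManagementBase(ABC):
    @abstractmethod
    def get_template(
        self, prompt_id: str, prompt_variables: Dict[str, str]
    ) -> List[dict]:
        """Fetch and render the stored prompt as chat messages."""
        ...

    def compile_messages(
        self,
        prompt_id: str,
        prompt_variables: Dict[str, str],
        client_messages: List[dict],
    ) -> List[dict]:
        # Shared across integrations: template first, then the caller's
        # messages appended rather than ignored.
        return self.get_template(prompt_id, prompt_variables) + client_messages
```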
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set
allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse
allows logging prompt id / prompt variables to langfuse
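The kind of metadata this adds to the standard logging payload, sketched as a TypedDict; the field names here are assumptions, not the exact payload schema.

```python
from typing import Dict, Optional, TypedDict

class PromptManagementMetadata(TypedDict):
    prompt_id: str
    prompt_variables: Optional[Dict[str, str]]
    prompt_integration: str  # e.g. "langfuse"
```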
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook
* fix(custom_logger.py, langfuse_prompt_management.py): remove 'headers' from custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks
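Approximate shape of the new hook after 'headers' was removed; consult `custom_logger.py` for the authoritative signature, as the types here are simplified.

```python
from typing import List, Optional, Tuple

class MyCustomLogger:
    async def async_get_chat_completion_prompt(
        self,
        model: str,
        messages: List[dict],
        non_default_params: dict,
        prompt_id: str,
        prompt_variables: Optional[dict],
        dynamic_callback_params: dict,
    ) -> Tuple[str, List[dict], dict]:
        # Return the (possibly rewritten) model, messages, and optional params.
        return model, messages, non_default_params
```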
* feat(router.py): expose new function for prompt management based routing
* feat(router.py): partial working router prompt factory logic
allows a load-balanced model group to be used as the model name in a langfuse prompt management call
* feat(router.py): fix prompt management with load balanced model group
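Illustrative usage of the combination described above; the model group name and prompt id are made up, and `prompt_id`/`prompt_variables` follow litellm's prompt management convention.

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "prod-chat", "litellm_params": {"model": "gpt-4o"}},
        {"model_name": "prod-chat", "litellm_params": {"model": "gpt-4o-mini"}},
    ]
)

response = router.completion(
    model="prod-chat",  # load-balanced model group, not a single deployment
    messages=[{"role": "user", "content": "hi"}],
    prompt_id="my-langfuse-prompt",       # template fetched from langfuse
    prompt_variables={"tone": "formal"},  # substituted into the template
)
```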
* feat(langfuse_prompt_management.py): support reading in openai params from langfuse
enables users to define optional params in langfuse instead of in client code
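A minimal sketch of the merge order this implies, assuming params stored on the langfuse prompt act as defaults that explicit client-side params override:

```python
def merge_optional_params(langfuse_params: dict, client_params: dict) -> dict:
    # Client code wins on conflicts; langfuse-stored params fill the gaps.
    return {**langfuse_params, **client_params}
```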
* test(test_Router.py): add unit test for router based langfuse prompt management
* fix: fix linting errors
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class
* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well
* feat(proxy_server.py): support setting custom tokenizer on config.yaml
Allows customizing value for `/utils/token_counter`
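A hedged sketch of what the config.yaml entry might look like; the exact keys are documented in `configs.md` (see the docs commit below) and may differ from this illustration.

```yaml
general_settings:
  custom_tokenizer:
    identifier: deepseek-ai/DeepSeek-V3  # huggingface repo for the tokenizer
    revision: main
```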
* fix(proxy_server.py): fix linting errors
* test: skip if file not found
* style: cleanup unused import
* docs(configs.md): add docs on setting custom tokenizer
* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'
Closes https://github.com/BerriAI/litellm/issues/7397
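Simplified sketch of the fix: `custom_llm_provider` now defaults to `None` and is inferred from the model name when omitted, rather than being a required argument.

```python
from typing import Optional
import litellm

def supports_response_schema(
    model: str, custom_llm_provider: Optional[str] = None
) -> bool:
    if custom_llm_provider is None:
        # get_llm_provider returns (model, provider, dynamic_api_key, api_base)
        model, custom_llm_provider, _, _ = litellm.get_llm_provider(model=model)
    info = litellm.get_model_info(model=model, custom_llm_provider=custom_llm_provider)
    return bool(info.get("supports_response_schema", False))
```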
* refactor(langfuse/): call langfuse logger inside customlogger compatible langfuse class, refactor langfuse logger to use verbose_logger.debug instead of print_verbose
* refactor(litellm_pre_call_utils.py): move config based team callbacks inside dynamic team callback logic
enables simpler unit testing for config-based team callbacks
* fix(proxy/_types.py): handle TeamCallbackMetadata None values
drop None values if present; if all values are None, use the default dict to avoid downstream errors
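An illustrative pydantic v2 sketch of the behaviour described above; the real `TeamCallbackMetadata` in `proxy/_types.py` may differ in its fields.

```python
from typing import Dict, List, Optional
from pydantic import BaseModel, model_validator

class TeamCallbackMetadata(BaseModel):
    success_callback: Optional[List[str]] = []
    failure_callback: Optional[List[str]] = []
    callback_vars: Optional[Dict[str, str]] = {}

    @model_validator(mode="before")
    @classmethod
    def drop_none_values(cls, values: dict) -> dict:
        # Drop None entries so field defaults apply; if every value is None,
        # this returns an empty dict and the defaults kick in wholesale.
        return {k: v for k, v in values.items() if v is not None}
```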
* test(test_proxy_utils.py): add unit test preventing future issues - asserts team_id in config state is not popped off across calls
Fixes https://github.com/BerriAI/litellm/issues/6787
* fix(langfuse_prompt_management.py): add success + failure logging event support
* fix: fix linting error
* test: fix test
* test: fix test
* test: override o1 prompt caching check - currently not working on openai's side
* test: fix test
* feat(langfuse/): support langfuse prompt management
Initial working commit for langfuse prompt management support
Closes https://github.com/BerriAI/litellm/issues/6269
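Illustrative end-to-end usage, with made-up prompt id and keys; the `langfuse/` model prefix and the `prompt_id`/`prompt_variables` parameters follow litellm's prompt management convention.

```python
import os
import litellm

os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."

response = litellm.completion(
    model="langfuse/gpt-3.5-turbo",         # route through langfuse prompt management
    prompt_id="my-chat-prompt",             # prompt stored in langfuse
    prompt_variables={"customer": "Acme"},  # substituted into the template
    messages=[{"role": "user", "content": "What is my invoice status?"}],
)
```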
* test: update test
* fix(litellm_logging.py): suppress linting error