* test(test_get_model_info.py): add unit test confirming a router deployment update is reflected in the global 'get_model_info'
* fix(get_supported_openai_params.py): fix custom llm provider 'get_supported_openai_params'
Fixes https://github.com/BerriAI/litellm/issues/7668
* docs(azure.md): clarify how azure ad token refresh works on the proxy
Closes https://github.com/BerriAI/litellm/issues/7665
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
* refactor(router.py): refactor '_prompt_management_factory' to use the logging object's 'get_chat_completion_prompt' logic
deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): mark prompt management as beta
given feedback - this still needs revision (e.g. passing in the user message, not ignoring it)
* refactor(prompt_management_base.py): introduce base class for prompt management
allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set
allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse
allows logging prompt id / prompt variables to langfuse
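A hedged sketch of how the prompt management pieces above surface in an SDK call; the `langfuse/` model prefix, `prompt_id`, and `prompt_variables` follow the langfuse prompt management integration, while the prompt name and variable values are placeholders:

```python
import os
import litellm

# Langfuse credentials for both prompt retrieval and tracing (placeholders).
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."

# "my-prompt" is a placeholder for a prompt stored in Langfuse. The client
# message is appended to the template messages, and the prompt id / variables
# are logged to Langfuse via the standard logging payload.
response = litellm.completion(
    model="langfuse/gpt-4o",
    prompt_id="my-prompt",
    prompt_variables={"user_name": "alice"},
    messages=[{"role": "user", "content": "What's the weather today?"}],
)
```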
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
* fix(main.py): fix lm_studio/ embedding routing
adds the mapping + updates docs with example
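For reference, a minimal sketch of the lm_studio/ embedding route; the model name is a placeholder and the api base assumes LM Studio's default local server:

```python
import litellm

response = litellm.embedding(
    model="lm_studio/nomic-embed-text-v1.5",  # placeholder model name
    api_base="http://localhost:1234/v1",      # LM Studio's default local server
    api_key="dummy-key",                      # LM Studio does not check the key
    input=["hello from litellm"],
)
print(len(response.data))
```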
* docs(self_serve.md): update doc to show how to auto-add sso users to teams
* fix(streaming_handler.py): simplify the async iterator check to just verify the streaming response is an async iterable
* fix: get build from pip working
* add tests for proxy_build_from_pip_tests
* docs: clean up deployment documentation
* docs cleanup
* docs: build from pip
* fix cd docker/build_from_pip
* docs(friendliai.md): update FriendliAI documentation and model details
* docs(friendliai.md): remove unused imports for cleaner documentation
* feat: add support for parallel function calling, system messages, and response schema in model configuration
* feat(router.py): support request prioritization for text completion calls
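A sketch of what prioritized text completions could look like, assuming the same `priority` kwarg the router's scheduler uses for chat completions; the deployment is a placeholder:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo-instruct",
            "litellm_params": {"model": "gpt-3.5-turbo-instruct"},  # needs OPENAI_API_KEY
        }
    ],
)

async def main():
    # Lower priority values are scheduled first when requests queue up.
    response = await router.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say hello",
        priority=0,
    )
    print(response)

asyncio.run(main())
```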
* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`
Fixes https://github.com/BerriAI/litellm/issues/7485
* fix: fix linting errors
* fix: fix linting error
* test(test_router_helper_utils.py): add direct test for '_schedule_factory'
Fixes code qa test
* test(test_utils.py): initial test for valid models
Addresses https://github.com/BerriAI/litellm/issues/7525
* fix: test
* feat(fireworks_ai/transformation.py): support retrieving valid models from fireworks ai endpoint
* refactor(fireworks_ai/): support checking model info on `/v1/models` route
* docs(set_keys.md): update docs to clarify usage of the llm provider api check
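A hedged sketch of the provider api check above; the `check_provider_endpoint` flag name is assumed from the docs update, and the key is a placeholder:

```python
import os
from litellm.utils import get_valid_models

os.environ["FIREWORKS_AI_API_KEY"] = "fw-..."  # placeholder

# With the flag set, providers that expose a models route (e.g. fireworks_ai
# via /v1/models) are queried for their live model list instead of relying
# only on the static model map.
valid_models = get_valid_models(check_provider_endpoint=True)
print(valid_models)
```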
* fix(watsonx/common_utils.py): support 'WATSONX_ZENAPIKEY' for IAM auth
* fix(watsonx): read in watsonx token from env var
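For illustration, a sketch of the watsonx env-var auth above; the url, project id, and model are placeholders, and the variable names other than WATSONX_ZENAPIKEY are assumptions based on the existing watsonx setup:

```python
import os
import litellm

os.environ["WATSONX_ZENAPIKEY"] = "..."                            # placeholder Zen api key
os.environ["WATSONX_URL"] = "https://my-cpd-instance.example.com"  # placeholder (assumed var name)
os.environ["WATSONX_PROJECT_ID"] = "my-project-id"                 # placeholder (assumed var name)

response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",
    messages=[{"role": "user", "content": "hi"}],
)
```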
* fix: fix linting errors
* fix(utils.py): fix provider config check
* style: cleanup unused imports
* feat(deepgram/transformation.py): support reading in deepgram api base from env var
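A hedged example of the env-var api base above; the model name follows litellm's `deepgram/` prefix, and the base url is a placeholder for a self-hosted or proxied endpoint:

```python
import os
import litellm

os.environ["DEEPGRAM_API_KEY"] = "..."                                  # placeholder
os.environ["DEEPGRAM_API_BASE"] = "https://my-deepgram.example.com/v1"  # placeholder, now read from env

with open("speech.wav", "rb") as audio_file:
    response = litellm.transcription(model="deepgram/nova-2", file=audio_file)
```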
* fix(litellm_logging.py): log the 'skipping logging' message at .info level
makes it easier to see
* docs(logging.md): add doc on turning off all tracking/logging for a request
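A sketch of the per-request opt-out documented above, going through the proxy with the OpenAI client; the virtual key, base url, and model are placeholders:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# extra_body fields are forwarded to the proxy; "no-log" turns off
# success/failure logging callbacks for this request only.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"no-log": True},
)
```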
* refactor(prometheus.py): refactor to remove `_tag` metrics and incorporate in regular metrics
* fix(prometheus.py): handle label values not set in enum values
* feat(prometheus.py): get e2e custom metadata labels working
* docs(prometheus.md): update docs to clarify how custom metrics would work
* test(test_prometheus_unit_tests.py): fix test
* test: add unit testing
* fix(prometheus.py): refactor litellm_input_tokens_metric to use label factory
makes adding new metrics easier
* feat(prometheus.py): add 'request_model' to 'litellm_input_tokens_metric'
* refactor(prometheus.py): refactor 'litellm_output_tokens_metric' to use label factory
makes adding new metrics easier
* feat(prometheus.py): emit requested model in 'litellm_output_tokens_metric'
* feat(prometheus.py): support tracking success events with custom metrics
* refactor(prometheus.py): refactor '_set_latency_metrics' to just use the initially created enum values dictionary
reduces scope for missing values
* feat(prometheus.py): refactor all tags to support custom metadata tags
enables metadata tags to be used across metrics for e2e tracking
* fix(prometheus.py): fix requested model on success event enum_values
* test: fix test
* test: fix test
* test: handle FileNotFoundError
* docs(prometheus.md): add new values to prometheus
* docs(prometheus.md): document adding custom metrics on prometheus
* bump: version 1.56.5 → 1.56.6
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class
* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well
* feat(proxy_server.py): support setting custom tokenizer on config.yaml
Allows customizing value for `/utils/token_counter`
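With a custom tokenizer configured in config.yaml, `/utils/token_counter` counts tokens with it; a minimal sketch of calling the endpoint (virtual key and model name are placeholders):

```python
import requests

resp = requests.post(
    "http://0.0.0.0:4000/utils/token_counter",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder virtual key
    json={
        "model": "my-model",                      # deployment name from config.yaml
        "messages": [{"role": "user", "content": "hello world"}],
    },
)
print(resp.json())
```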
* fix(proxy_server.py): fix linting errors
* test: skip if file not found
* style: cleanup unused import
* docs(configs.md): add docs on setting custom tokenizer
* refactor(utils.py): migrate amazon titan config to base config
* refactor(utils.py): refactor bedrock meta invoke model translation to use base config
* refactor(utils.py): move bedrock ai21 to base config
* refactor(utils.py): move bedrock cohere to base config
* refactor(utils.py): move bedrock mistral to use base config
* refactor(utils.py): move all provider optional param translations to using a config
* docs(clientside_auth.md): clarify how to pass vertex region to litellm proxy
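A hedged sketch of passing the vertex region clientside through the proxy; the `vertex_location` field name is an assumption mirroring the litellm_params key, and the key, base url, model, and region are placeholders:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# "vertex_location" is an assumed field name; it is forwarded to the proxy
# via extra_body along with the rest of the request.
response = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"vertex_location": "us-east1"},
)
```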
* fix(utils.py): handle scenario where custom llm provider is none / empty
* fix: fix get config
* test(test_otel_load_tests.py): widen perf margin
* fix(utils.py): fix the get provider config check to handle custom llms
* fix(utils.py): fix check