* fix(__init__.py): exclude pricing-only model cost entries from the set of real model names
prevents bad health checks on wildcard routes
* fix(get_llm_provider.py): handle calling bedrock_converse models
* feat(langfuse.py): log the used prompt when prompt management used
* test: fix test
* docs(self_serve.md): add doc on restricting personal key creation on ui
* feat(s3.py): support s3 logging with team alias prefixes (if available)
New preview feature
* fix(main.py): remove old if block; simplify to just awaiting the response if a coroutine is returned
fixes lm_studio async embedding error
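A minimal sketch of the simplified pattern (helper names here are illustrative, not the actual `main.py` code): rather than branching on provider type, just await the result when a coroutine comes back.

```python
import asyncio

def provider_embedding(is_async: bool):
    """Stand-in for a provider call that may return a coroutine (async
    providers like lm_studio) or a plain response (sync providers)."""
    async def _acall():
        return {"data": [0.1, 0.2]}
    return _acall() if is_async else {"data": [0.1, 0.2]}

async def aembedding(is_async: bool):
    response = provider_embedding(is_async)
    if asyncio.iscoroutine(response):  # the old if-block reduced to this
        response = await response
    return response

print(asyncio.run(aembedding(True)))   # {'data': [0.1, 0.2]}
print(asyncio.run(aembedding(False)))  # {'data': [0.1, 0.2]}
```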
* fix(langfuse.py): handle the 'get prompt' check
* test(test_basic_python_version.py): assert all optional dependencies are marked as extras on poetry
Fixes https://github.com/BerriAI/litellm/issues/7677
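A sketch of what the assertion can look like, assuming a standard poetry `pyproject.toml` layout (the actual test may differ):

```python
import tomli  # stdlib tomllib on python >= 3.11

with open("pyproject.toml", "rb") as f:
    pyproject = tomli.load(f)

deps = pyproject["tool"]["poetry"]["dependencies"]
extras = pyproject["tool"]["poetry"].get("extras", {})
extra_deps = {dep for group in extras.values() for dep in group}

for name, spec in deps.items():
    if isinstance(spec, dict) and spec.get("optional"):
        # every optional dependency must be reachable via some extra
        assert name in extra_deps, f"{name} is optional but not in any extra"
```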
* docs(secret.md): clarify 'read_and_write' secret manager usage on aws
* docs(secret.md): fix doc
* build(ui/teams.tsx): add edit/delete buttons for updating user / team membership on the ui
allows updating a user's role to admin on the ui
* build(ui/teams.tsx): display edit member component on ui, when edit button on member clicked
* feat(team_endpoints.py): support updating team member role to admin via api endpoints
allows a team member to be promoted to admin after being added
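A hedged sketch of calling this (the endpoint path and payload shape are assumptions modeled on the existing `/team/member_add` pattern; check the proxy docs for the exact schema):

```python
import requests

resp = requests.post(
    "http://localhost:4000/team/member_update",  # assumed endpoint
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "team_id": "my-team-id",
        "user_id": "my-user-id",
        "role": "admin",  # promote an existing member to team admin
    },
)
print(resp.json())
```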
* build(ui/user_dashboard.tsx): if team admin - show all team keys
Fixes https://github.com/BerriAI/litellm/issues/7650
* test(config.yml): add tomli to ci/cd
* test: don't call python_basic_testing in local testing (covered by python 3.13 testing)
* test(test_get_model_info.py): add unit test confirming router deployment updates global 'get_model_info'
* fix(get_supported_openai_params.py): fix custom llm provider 'get_supported_openai_params'
Fixes https://github.com/BerriAI/litellm/issues/7668
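For reference, the helper being fixed is used like this (provider/model values are illustrative):

```python
import litellm

# returns the list of openai params litellm can translate for this provider
params = litellm.get_supported_openai_params(
    model="gpt-3.5-turbo",
    custom_llm_provider="openai",
)
print(params)  # e.g. ["temperature", "max_tokens", "stream", ...]
```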
* docs(azure.md): clarify how azure ad token refresh on proxy works
Closes https://github.com/BerriAI/litellm/issues/7665
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic
deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): update prompt management to be in beta
per feedback, this still needs revision (e.g. passing in the user message rather than ignoring it)
* refactor(prompt_management_base.py): introduce base class for prompt management
allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set
allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse
allows logging prompt id / prompt variables to langfuse
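A hedged sketch of the client call this tracks, following the prompt management docs (prompt id/variables are placeholders; assumes langfuse credentials are set via env vars):

```python
import litellm

response = litellm.completion(
    model="langfuse/gpt-3.5-turbo",     # langfuse prompt management route
    prompt_id="jokes",                   # which langfuse prompt to pull
    prompt_variables={"topic": "tea"},   # substituted into the template
    messages=[{"role": "user", "content": "tell me a joke"}],
)
# prompt_id + prompt_variables now land in the standard logging payload,
# so the langfuse trace shows which prompt produced this call
```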
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
* fix(main.py): fix lm_studio/ embedding routing
adds the mapping + updates docs with example
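A hedged example of the now-working route (model name and api_base are placeholders for a local LM Studio server):

```python
import litellm

response = litellm.embedding(
    model="lm_studio/text-embedding-nomic-embed-text-v1.5",  # placeholder
    input=["hello world"],
    api_base="http://localhost:1234/v1",  # local LM Studio endpoint
    api_key="lm-studio",                  # LM Studio accepts any key
)
print(response.data[0]["embedding"][:5])
```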
* docs(self_serve.md): update doc to show how to auto-add sso users to teams
* fix(streaming_handler.py): simplify async iterator check, to just check if streaming response is an async iterable
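The simplified check amounts to something like this (a sketch, not the exact `streaming_handler.py` code):

```python
from collections.abc import AsyncIterable

async def fake_stream():
    yield {"choices": [{"delta": {"content": "hi"}}]}

# test the protocol instead of checking for specific response classes
print(isinstance(fake_stream(), AsyncIterable))  # True
print(isinstance([1, 2, 3], AsyncIterable))      # False
```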
* feat(ui_sso.py): support reading team ids from sso token
* feat(ui_sso.py): working upsert of sso user team memberships in litellm, if the team exists
adds the user to the relevant teams, if the user belongs to teams that exist on litellm (sketched below)
* fix(ui_sso.py): safely handle add team member task
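Roughly the flow added across the `ui_sso.py` changes above, as a hedged sketch (the claim name, helper names, and error handling are assumptions, not the actual code):

```python
import asyncio

async def add_team_member(user_id: str, team_id: str) -> None:
    """Stand-in for litellm's member upsert; fails if the team is unknown."""
    if team_id not in {"team-a", "team-b"}:  # teams that exist on litellm
        raise ValueError(f"team {team_id} does not exist on litellm")
    print(f"added {user_id} to {team_id}")

async def upsert_sso_user_teams(user_id: str, sso_token: dict) -> None:
    team_ids = sso_token.get("team_ids", [])  # assumed claim name
    results = await asyncio.gather(
        *(add_team_member(user_id, t) for t in team_ids),
        return_exceptions=True,  # one bad team shouldn't abort the login
    )
    for team_id, result in zip(team_ids, results):
        if isinstance(result, Exception):
            print(f"skipping {team_id}: {result}")

asyncio.run(upsert_sso_user_teams("user-1", {"team_ids": ["team-a", "team-x"]}))
```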
* build(ui/): support setting team id when creating team on UI
* build(ui/teams.tsx): allow setting the team id on the ui
* build(circle_ci/requirements.txt): add fastapi-sso to ci/cd testing
* fix: fix linting errors
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks
Addresses https://github.com/BerriAI/litellm/issues/7621
* fix(streaming_utils.py): fix modelresponseiterator for openai like chunk parser
ensures the chunk parser uses the correct tool call id when translating each chunk
Fixes https://github.com/BerriAI/litellm/issues/7621
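The essence of the fix, sketched (simplified; not the actual parser code): the provider-supplied tool call id must be carried through so the chunk builder can stitch later argument fragments onto the right call.

```python
def parse_tool_call_chunk(chunk: dict) -> dict:
    tool_call = chunk["choices"][0]["delta"]["tool_calls"][0]
    return {
        "id": tool_call.get("id"),  # keep the provider-supplied id
        "index": tool_call.get("index", 0),
        "name": tool_call.get("function", {}).get("name"),
        "arguments": tool_call.get("function", {}).get("arguments", ""),
    }

first = {"choices": [{"delta": {"tool_calls": [
    {"id": "call_abc", "index": 0,
     "function": {"name": "get_weather", "arguments": ""}}]}}]}
print(parse_tool_call_chunk(first)["id"])  # call_abc
```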
* build(model_hub.tsx): display cost pricing on model hub
* build(model_hub.tsx): show cost per token pricing + complete model information
* fix(types/utils.py): fix usage object handling
* feat(cost_calculator.py): add cost tracking ($0) for openai moderations endpoint
removes the sentry cost-tracking errors this endpoint previously caused
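For reference, the endpoint in question (a hedged example; the model name is illustrative and an OpenAI key is required):

```python
import litellm

response = litellm.moderation(
    input="hello world",
    model="omni-moderation-latest",  # openai moderations endpoint
)
# cost is now tracked as $0 instead of raising a cost-calculation error
print(response.results[0].flagged)
```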
* build(teams.tsx): allow assigning teams to orgs
* build(ui/): update ui
* fix: drop unsupported non-whitespace characters for real when calling… (#7484)
* fix: drop unsupported non-whitespace characters for real when calling anthropic with stop sequences
* test: add parameterized test for _map_stop_sequences method in AnthropicConfig
---------
Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
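The gist of the stop-sequence mapping above, sketched (a simplification of `_map_stop_sequences`, assuming anthropic rejects whitespace-only stop sequences):

```python
def map_stop_sequences(stop) -> list:
    """Normalize openai-style 'stop' for anthropic: trim surrounding
    whitespace and drop sequences left with no non-whitespace characters."""
    if isinstance(stop, str):
        stop = [stop]
    return [s.strip() for s in stop if s.strip()]

print(map_stop_sequences(["\n\nHuman:", "   ", "END "]))  # ['Human:', 'END']
```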
* fix: get build from pip working
* test: add tests for proxy_build_from_pip_tests
* docs: clean up deployment docs
* docs: cleanup
* docs: document build from pip
* fix: cd for docker/build_from_pip
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook
* fix(custom_logger.py, langfuse_prompt_management.py): remove 'headers' from the custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks
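A hedged sketch of implementing the new hook (the method signature is an assumption based on this changelog; check `CustomLogger` for the exact one):

```python
from litellm.integrations.custom_logger import CustomLogger

class MyPromptManager(CustomLogger):
    async def async_get_chat_completion_prompt(
        self, model, messages, non_default_params,
        prompt_id, prompt_variables, dynamic_callback_params,
    ):
        # resolve prompt_id to a template, merge prompt_variables, and
        # return the (possibly rewritten) model + messages + params
        messages = [{"role": "system", "content": f"prompt:{prompt_id}"}] + messages
        return model, messages, non_default_params
```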
* feat(router.py): expose new function for prompt management based routing
* feat(router.py): partially working router prompt factory logic
allows a load-balanced model group to be used as the model name with a langfuse prompt management call
* feat(router.py): fix prompt management with load balanced model group
* feat(langfuse_prompt_management.py): support reading in openai params from langfuse
enables users to define optional params in langfuse instead of in client code
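Putting the router pieces together, a hedged end-to-end sketch (model names, keys, and prompt id are placeholders; the exact wiring may differ):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",  # load-balanced model group
            "litellm_params": {"model": "azure/chatgpt-v-2", "api_key": "..."},
        },
    ],
)

response = router.completion(
    model="gpt-3.5-turbo",
    prompt_id="my-langfuse-prompt",        # template (and optional params) from langfuse
    prompt_variables={"user_name": "sam"},
    messages=[{"role": "user", "content": "hi"}],
)
```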
* test(test_Router.py): add unit test for router based langfuse prompt management
* fix: fix linting errors