* feat(ui_sso.py): support reading team ids from sso token
* feat(ui_sso.py): upsert SSO user team memberships in litellm, if the team exists
Adds the user to the relevant teams when the SSO token lists team ids that already exist on litellm
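A rough sketch of the team-upsert flow described above, assuming a Prisma-backed team table; `add_team_member` is a hypothetical helper, not the actual ui_sso.py function:

```python
from typing import List

async def upsert_sso_user_teams(user_id: str, sso_team_ids: List[str], prisma_client) -> None:
    """Add the SSO user to each team listed in the token, but only if that
    team already exists on litellm. Errors are swallowed so login never breaks."""
    for team_id in sso_team_ids:
        try:
            team_row = await prisma_client.db.litellm_teamtable.find_unique(
                where={"team_id": team_id}
            )
            if team_row is None:
                continue  # team not created on litellm yet -> skip
            await add_team_member(team_id=team_id, user_id=user_id)  # hypothetical helper
        except Exception:
            continue  # "safely handle add team member task" -- don't block SSO login
```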
* fix(ui_sso.py): safely handle add team member task
* build(ui/): support setting team id when creating team on UI
* build(ui/): teams.tsx
allow setting team id on ui
* build(circle_ci/requirements.txt): add fastapi-sso to ci/cd testing
* fix: fix linting errors
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks
Addresses https://github.com/BerriAI/litellm/issues/7621
* fix(streaming_utils.py): fix modelresponseiterator for openai like chunk parser
ensures chunk parser uses the correct tool call id when translating the chunk
Fixes https://github.com/BerriAI/litellm/issues/7621
* build(model_hub.tsx): display cost pricing on model hub
* build(model_hub.tsx): show cost per token pricing + complete model information
* fix(types/utils.py): fix usage object handling
* build(ui/): update ui
* fix: drop unsupported non-whitespace characters for real when calling… (#7484)
* fix: drop unsupported non-whitespace characters for real when calling anthropic with stop sequences
* test: add parameterized test for _map_stop_sequences method in AnthropicConfig
---------
Co-authored-by: Wolfram Ravenwolf <52386626+WolframRavenwolf@users.noreply.github.com>
* fix: get build from pip working
* add tests for proxy_build_from_pip_tests
* doc clean up for deployment
* docs cleanup
* docs build from pip
* fix cd docker/build_from_pip
* fix(custom_logger.py): expose new 'async_get_chat_completion_prompt' event hook
* fix(custom_logger.py, langfuse_prompt_management.py): update prompt management event hook signatures
remove 'headers' from the custom logger 'async_get_chat_completion_prompt' and 'get_chat_completion_prompt' event hooks
* feat(router.py): expose new function for prompt management based routing
* feat(router.py): partially working router prompt factory logic
allows a load-balanced model group to be used as the model name in a langfuse prompt management call
* feat(router.py): fix prompt management with load balanced model group
* feat(langfuse_prompt_management.py): support reading in openai params from langfuse
enables users to define optional params in langfuse instead of in client code
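A minimal sketch of the router-based langfuse prompt management flow described above, assuming langfuse callbacks are already configured; the `prompt_id` / `prompt_variables` kwargs and the exact wiring (callback config vs. litellm_params) are assumptions and may differ:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "my-chat-model",  # load-balanced model group
            "litellm_params": {"model": "gpt-3.5-turbo"},
        },
    ]
)

response = router.completion(
    model="my-chat-model",
    prompt_id="my-langfuse-prompt",           # prompt stored in langfuse
    prompt_variables={"user_name": "Alice"},  # filled into the prompt template
    messages=[{"role": "user", "content": "placeholder, may be overridden by the prompt"}],
)
```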
* test(test_Router.py): add unit test for router based langfuse prompt management
* fix: fix linting errors
* feat(key_management_endpoints.py): allow deleting keys based on key alias
makes it easier for the proxy admin to delete a known bad key
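A hedged example of deleting keys by alias via the proxy; the request shape is an assumption based on the `key_aliases` param documented below:

```python
import requests

resp = requests.post(
    "http://localhost:4000/key/delete",
    headers={"Authorization": "Bearer sk-1234"},  # proxy admin key
    json={"key_aliases": ["known-bad-key-alias"]},
)
print(resp.json())  # now returns the deleted keys (see fix below)
```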
* fix(key_management_event_hooks.py): fix linting error
* docs(key_management_endpoints.py): document new key_aliases param
* fix(key_management_endpoints.py): return deleted keys to user
fixes the return value when deleting by key aliases
* feat: initial commit for new 'organizations' tab on ui
* build(ui/): create generic card for rendering complete org data table
can be reused for teams as well, which simplifies things
* build(ui/): display created orgs on ui
* build(ui/): support adding orgs via UI
* build(ui/): add org in selection dropdown
* build(organizations.tsx): allow assigning org admins
* build(ui/): show org members on ui
* build(ui/): cleanup + show actual models on org dropdown
* build(ui/): explain user roles within organization
* feat(router.py): support request prioritization for text completion calls
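A sketch of what request prioritization on a text completion call could look like, assuming the same `priority` kwarg the router scheduler uses for chat completions (an assumption, not confirmed here):

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo-instruct",
            "litellm_params": {"model": "text-completion-openai/gpt-3.5-turbo-instruct"},
        }
    ]
)

async def main():
    # lower number = higher priority in the scheduler queue (assumption)
    response = await router.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Say hello",
        priority=0,
    )
    print(response)

asyncio.run(main())
```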
* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`
Fixes https://github.com/BerriAI/litellm/issues/7485
* fix: fix linting errors
* fix: fix linting error
* test(test_router_helper_utils.py): add direct test for '_schedule_factory'
Fixes code qa test
- Ensured that `before` and `after` parameters are only passed when provided to avoid AttributeError.
- Implemented safe access using default values for `before` and `after` to prevent missing attribute issues.
- Added consistent handling of `order` and `limit` to improve flexibility and robustness in API calls.
* fix(redact_messages.py): fix redact messages for non-model response input to be dictionary
fixes issue with otel logging when message redaction is enabled
* fix(proxy_server.py): fix langfuse key leak in exception string
* test: fix test
* test: fix test
* test: fix tests
* fix(types/utils.py): support langfuse + humanloop routes on llm router
* fix(main.py): remove acompletion elif block
just await the result if a coroutine is returned
* refactor(prometheus.py): refactor to remove `_tag` metrics and incorporate in regular metrics
* fix(prometheus.py): handle label values not set in enum values
* feat(prometheus.py): working e2e custom metadata labels
* docs(prometheus.md): update docs to clarify how custom metrics would work
* test(test_prometheus_unit_tests.py): fix test
* test: add unit testing
* test(azure_openai_o1.py): initial commit with testing for azure openai o1 preview model
* fix(base_llm_unit_tests.py): handle azure o1 preview response format tests
skip as o1 on azure doesn't support tool calling yet
* fix: initial commit of azure o1 handler using openai caller
simplifies calling + allows the fake streaming logic already implemented for openai to just work
* feat(azure/o1_handler.py): fake o1 streaming for azure o1 models
azure does not currently support streaming for o1
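An illustrative sketch of the fake-streaming idea (not the literal o1_handler.py code): make the full non-streaming call, then yield the final content in chunks so `stream=True` callers keep working:

```python
from typing import Iterator

def fake_stream(full_text: str, chunk_size: int = 20) -> Iterator[str]:
    """Yield a completed response in small pieces to emulate streaming."""
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i : i + chunk_size]

# usage: wrap the complete azure o1 response content
for chunk in fake_stream("azure o1 does not stream yet, so we chunk the final answer"):
    print(chunk, end="", flush=True)
```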
* feat(o1_transformation.py): support overriding 'should_fake_stream' on azure/o1 via 'supports_native_streaming' param on model info
enables users to toggle this on once azure allows o1 streaming, without needing to bump litellm versions
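A sketch of how the override could be set on a router deployment, assuming the flag lives under `model_info` as named above:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "o1",
            "litellm_params": {
                "model": "azure/o1-preview",
                "api_base": "https://my-endpoint.openai.azure.com",
                "api_key": "os.environ/AZURE_API_KEY",
            },
            # once azure supports o1 streaming, flip this to True to skip fake streaming
            "model_info": {"supports_native_streaming": False},
        }
    ]
)
```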
* style(router.py): remove 'give feedback/get help' messaging when router is used
Prevents noisy messaging
Closes https://github.com/BerriAI/litellm/issues/5942
* fix(types/utils.py): handle none logprobs
Fixes https://github.com/BerriAI/litellm/issues/328
* fix(exception_mapping_utils.py): fix error str unbound error
* refactor(azure_ai/): move to openai_like chat completion handler
allows for easy swapping of api base urls (e.g. ai.services.com)
Fixes https://github.com/BerriAI/litellm/issues/7275
* refactor(azure_ai/): move to base llm http handler
* fix(azure_ai/): handle differing api endpoints
* fix(azure_ai/): make sure all unit tests are passing
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting error
* fix: fix linting errors
* fix(azure_ai/transformation.py): handle extra body param
* fix(azure_ai/transformation.py): fix max retries param handling
* fix: fix test
* test(test_azure_o1.py): fix test
* fix(llm_http_handler.py): support handling azure ai unprocessable entity error
* fix(llm_http_handler.py): handle sync invalid param error for azure ai
* fix(azure_ai/): streaming support with base_llm_http_handler
* fix(llm_http_handler.py): working sync stream calls with unprocessable entity handling for azure ai
* fix: fix linting errors
* fix(llm_http_handler.py): fix linting error
* fix(azure_ai/): handle cohere tool call invalid index param error
* fix(internal_user_endpoints.py): fix team list sort - handle team_alias being set + None
* fix(key_management_endpoints.py): allow team admin to create key for member via admin ui
Fixes https://github.com/BerriAI/litellm/issues/7482
* fix(proxy_server.py): allow querying info on specific model group via `/model_group/info`
allows client-side user to get model info from proxy
* fix(proxy_server.py): add docstring on `/model_group/info` showing how to filter by model name
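A hedged example of filtering `/model_group/info` by model group; the `model_group` query param name is an assumption based on the docstring change above:

```python
import requests

resp = requests.get(
    "http://localhost:4000/model_group/info",
    headers={"Authorization": "Bearer sk-1234"},
    params={"model_group": "gpt-4"},
)
print(resp.json())  # info (max tokens, pricing, etc.) for just that model group
```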
* test(test_proxy_utils.py): add unit test for returning model group info filtered
* fix(proxy_server.py): fix query param
* fix(test_Get_model_info.py): handle no whitelisted bedrock models
* fix(langfuse_prompt_management.py): migrate dynamic logging to langfuse custom logger compatible class
* fix(langfuse_prompt_management.py): support failure callback logging to langfuse as well
* feat(proxy_server.py): support setting custom tokenizer on config.yaml
Allows customizing value for `/utils/token_counter`
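A hedged example of calling `/utils/token_counter` once a custom tokenizer is configured in config.yaml (the exact config key is not shown here):

```python
import requests

resp = requests.post(
    "http://localhost:4000/utils/token_counter",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "my-custom-model",  # model whose custom tokenizer is configured
        "messages": [{"role": "user", "content": "hello world"}],
    },
)
print(resp.json())  # token count computed with the configured tokenizer
```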
* fix(proxy_server.py): fix linting errors
* test: skip if file not found
* style: cleanup unused import
* docs(configs.md): add docs on setting custom tokenizer
* test: fix azure o1 test
* test: fix tests
* fix: fix test
* docs(sidebar.js): docs for support model access groups for wildcard routes
* feat(key_management_endpoints.py): add check if user is premium_user when adding model access group for wildcard route
* refactor(docs/): make control model access a root-level doc in proxy sidebar
easier to discover how to control model access on litellm
* docs: more cleanup
* feat(fireworks_ai/): add document inlining support
Enables users to call non-vision models with images/PDFs/etc.
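A sketch of what a document-inlining call could look like, relying on the auto-transform behavior described above; the model id is only an example:

```python
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this document"},
                {"type": "image_url", "image_url": {"url": "https://example.com/report.pdf"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```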
* test(test_fireworks_ai_translation.py): add unit testing for fireworks ai transform inline helper util
* docs(docs/): add document inlining details to fireworks ai docs
* feat(fireworks_ai/): allow user to dynamically disable auto add transform inline
allows client-side disabling of this feature for proxy users
* feat(fireworks_ai/): return 'supports_vision' and 'supports_pdf_input' true on all fireworks ai models
now true as fireworks ai supports document inlining
* test: fix tests
* fix(router.py): add unit testing for _is_model_access_group_for_wildcard_route
* fix(azure_ai/transformation.py): route ai.services.azure calls to the azure provider route
requires token to be passed in as 'api-key'
Closes https://github.com/BerriAI/litellm/issues/7275
* fix(key_management_endpoints.py): enforce user is member of team, if team_id set and team_id exists in team table
* fix(key_management_endpoints.py): handle assigned_user_id = none
* feat(create_key_button.tsx): allow assigning keys to other users
allows the proxy admin to easily assign keys to other users
* build(create_key_button.tsx): fix error message display
don't swallow the error message for key creation failure
* build(create_key_button.tsx): allow proxy admin to edit team id
* build(create_key_button.tsx): allow proxy admin to assign keys to other users
* build(edit_user.tsx): clarify how 'user budgets' are applied
* test: remove dup test
* fix(key_management_endpoints.py): don't raise error if team not in db
* test: fix test
* feat(main.py): mock_response() - support 'litellm.ContextWindowExceededError' in mock response
enables quicker router/fallback/proxy debugging on context window errors
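A hedged example of the mock behavior described above: passing the exception name as `mock_response` should raise it, so fallback and context-window paths can be exercised without a real provider call:

```python
import litellm

try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_response="litellm.ContextWindowExceededError",
    )
except litellm.ContextWindowExceededError as e:
    print("got mocked context window error:", e)
```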
* feat(exception_mapping_utils.py): extract special litellm errors from error str if calling `litellm_proxy/` as provider
Closes https://github.com/BerriAI/litellm/issues/7259
* fix(user_api_key_auth.py): specify 'Received Proxy Server Request' is span kind server
Closes https://github.com/BerriAI/litellm/issues/7298
* init commit: fine-tuning (ft) jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* simplify Azure fine-tuning to reuse all the OpenAI fine-tuning methods
* update doc string
* add aretrieve_fine_tuning_job
* reuse `handle_exception_on_proxy` from litellm.proxy.utils
* fix naming
* add /fine_tuning/jobs/{fine_tuning_job_id:path}
* remove unused imports
* update func signature
* run ci/cd again
* ci/cd run again
* fix code quality
* ci/cd run again
* fix(model_dashboard.tsx): support setting model_info params - e.g. mode on ui
Closes https://github.com/BerriAI/litellm/issues/5270
* fix(lowest_tpm_rpm_v2.py): deployment rpm over limit check
fixes a selection error when getting potential deployments below the known tpm/rpm limit
Fixes https://github.com/BerriAI/litellm/issues/7395
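A simplified sketch of the over-limit check (variable names are illustrative, not the real lowest_tpm_rpm_v2.py code): drop deployments whose current rpm usage has already reached their configured rpm limit before picking a deployment:

```python
def filter_rpm_over_limit(deployments, current_rpm_usage):
    """Return only deployments that still have rpm headroom."""
    healthy = []
    for d in deployments:
        rpm_limit = d.get("litellm_params", {}).get("rpm")
        used = current_rpm_usage.get(d["model_info"]["id"], 0)
        if rpm_limit is not None and used >= rpm_limit:
            continue  # already at its rpm limit -> not a candidate
        healthy.append(d)
    return healthy
```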
* fix(test_tpm_rpm_routing_v2.py): add unit test for https://github.com/BerriAI/litellm/issues/7395
* fix(lowest_tpm_rpm_v2.py): fix tpm key name in dict post rpm update
* test: rename test to run earlier
* test: skip flaky test