* fix(main.py): fix retries being multiplied when using openai sdk
Closes https://github.com/BerriAI/litellm/pull/7130
* docs(prompt_management.md): add langfuse prompt management doc
* feat(team_endpoints.py): allow teams to add their own models
Enables teams to call their own finetuned models via the proxy
* test: add better enforcement check testing for `/model/new` now that teams can add their own models
* docs(team_model_add.md): tutorial for allowing teams to add their own models
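A rough sketch of what this enables, assuming the proxy runs locally. The placement of `team_id` under `model_info` and the key used are assumptions based on the commits above, not a confirmed schema:

```python
import requests

# Hypothetical example: a team admin registers their finetuned model via
# /model/new so team members can call it through the proxy.
resp = requests.post(
    "http://localhost:4000/model/new",
    headers={"Authorization": "Bearer sk-team-admin-key"},  # hypothetical key
    json={
        "model_name": "my-team-finetune",  # alias callers will use
        "litellm_params": {
            "model": "openai/ft:gpt-3.5-turbo:my-org::abc123",  # hypothetical id
            "api_key": "os.environ/OPENAI_API_KEY",
        },
        "model_info": {"team_id": "my-team-id"},  # assumed field placement
    },
)
print(resp.json())
```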
* test: fix test
* fix(cost_calculator.py): move to using `.get_model_info()` for cost per token calculations
ensures cost tracking is reliable - handles edge cases when parsing the model cost map
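A simplified sketch of the approach, reading per-token prices off `get_model_info()` instead of parsing the raw cost map:

```python
import litellm

# Per-token prices come from the normalized model info, which already handles
# the cost-map edge cases centrally.
info = litellm.get_model_info(model="gpt-4o-mini")
prompt_tokens, completion_tokens = 1_000, 250
cost = (
    prompt_tokens * info["input_cost_per_token"]
    + completion_tokens * info["output_cost_per_token"]
)
print(f"estimated cost: ${cost:.6f}")
```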
* build(model_prices_and_context_window.json): add 'supports_response_schema' for select tgai models
Fixes https://github.com/BerriAI/litellm/pull/7037#discussion_r1872157329
* build(model_prices_and_context_window.json): remove 'pdf input' and 'vision' support from nova micro in model map
Bedrock docs indicate no support for micro - https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html
* fix(converse_transformation.py): support amazon nova tool use
* fix(opentelemetry): Add missing LLM request type attribute to spans (#7041)
* feat(opentelemetry): add LLM request type attribute to spans
* lint
* fix: curl usage (#7038)
curl -d, --data <data> is lowercase d
curl -D, --dump-header <filename> is uppercase D
references:
https://curl.se/docs/manpage.html#-d
https://curl.se/docs/manpage.html#-D
* fix(spend_tracking.py): handle empty 'id' in model response - when creating spend log
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(streaming_chunk_builder.py): handle initial id being empty string
Fixes https://github.com/BerriAI/litellm/issues/7023
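A minimal sketch of the defensive check (helper name is illustrative):

```python
import uuid
from typing import Optional

def ensure_response_id(response_id: Optional[str]) -> str:
    # Providers occasionally return id="" - fall back to a generated id so
    # spend logs and chunk building always have a usable key.
    if not response_id:  # catches both None and ""
        return f"chatcmpl-{uuid.uuid4()}"
    return response_id
```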
* fix(anthropic_passthrough_logging_handler.py): add end user cost tracking for anthropic pass through endpoint
* docs(pass_through/): refactor docs location + add table on supported features for pass through endpoints
* feat(anthropic_passthrough_logging_handler.py): support end user cost tracking via anthropic sdk
* docs(anthropic_completion.md): add docs on passing end user param for cost tracking on anthropic sdk
* fix(litellm_logging.py): use standard logging payload if present in kwargs
prevent datadog logging error for pass through endpoints
* docs(bedrock.md): add rerank api usage example to docs
* bugfix/change dummy tool name format (#7053)
* fix viewing keys (#7042)
* ui new build
* build(model_prices_and_context_window.json): add bedrock region models to model cost map (#7044)
* bye (#6982)
* (fix) litellm router.aspeech (#6962)
* doc Migrating Databases
* fix aspeech on router
* test_audio_speech_router
* test_audio_speech_router
* docs show supported providers on batches api doc
* change dummy tool name format
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix: fix linting errors
* test: update test
* fix(litellm_logging.py): fix pass through check
* fix(test_otel_logging.py): fix test
* fix(cost_calculator.py): update handling for cost per second
* fix(cost_calculator.py): fix cost check
* test: fix test
* (fix) adding public routes when using custom header (#7045)
* get_api_key_from_custom_header
* add test_get_api_key_from_custom_header
* fix testing - use 1 file for user api key auth tests
* fix test user api key auth
* test_custom_api_key_header_name
* build: update ui build
---------
Co-authored-by: Doron Kopit <83537683+doronkopit5@users.noreply.github.com>
Co-authored-by: lloydchang <lloydchang@gmail.com>
Co-authored-by: hgulersen <haymigulersen@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(key_management_endpoints.py): override metadata field value on update
allow user to override tags
* feat(__init__.py): expose new disable_end_user_cost_tracking_prometheus_only metric
allow disabling end user cost tracking on prometheus - fixes cardinality issue
* fix(litellm_pre_call_utils.py): add key/team level enforced params
Fixes https://github.com/BerriAI/litellm/issues/6652
* fix(key_management_endpoints.py): allow user to pass in `enforced_params` as a top level param on /key/generate and /key/update
* docs(enterprise.md): add docs on enforcing required params for llm requests
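A hedged example of the new top-level param on `/key/generate`; the specific params enforced here are illustrative:

```python
import requests

# Generate a key whose requests must always include 'user' and a
# generation name in metadata.
resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-admin-key"},  # hypothetical admin key
    json={"enforced_params": ["user", "metadata.generation_name"]},
)
print(resp.json()["key"])
```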
* Add support of Galadriel API (#7005)
* fix(router.py): robust retry after handling
set retry-after time to 0 if there are >0 healthy deployments; handle the base case of 1 deployment
* test(test_router.py): fix test
* feat(bedrock/): add support for 'nova' models
also adds explicit 'converse/' route for simpler routing
* fix: fix 'supports_pdf_input'
return if model supports pdf input on get_model_info
* feat(converse_transformation.py): support bedrock pdf input
* docs(document_understanding.md): add document understanding to docs
* fix(litellm_pre_call_utils.py): fix linting error
* fix(init.py): fix passing of bedrock converse models
* feat(bedrock/converse): support 'response_format={"type": "json_object"}'
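A sketch of the new Converse support, assuming Bedrock credentials are configured; the model id is illustrative:

```python
import litellm

response = litellm.completion(
    # explicit 'converse/' route added above for simpler routing
    model="bedrock/converse/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' key."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```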
* fix(converse_handler.py): fix linting error
* fix(base_llm_unit_tests.py): fix test
* fix: fix test
* test: fix test
* test: fix test
* test: remove duplicate test
---------
Co-authored-by: h4n0 <4738254+h4n0@users.noreply.github.com>
* docs(config_settings.md): document all router_settings
* ci(config.yml): add router_settings doc test to ci/cd
* test: debug test on ci/cd
* test: debug ci/cd test
* test: fix test
* fix(team_endpoints.py): skip invalid team object. don't fail `/team/list` call
Prevents downstream errors from the UI failing to load the team list
* test(base_llm_unit_tests.py): add 'response_format={"type": "text"}' test to base_llm_unit_tests
adds complete coverage for all 'response_format' values to ci/cd
* feat(router.py): support wildcard routes in `get_router_model_info()`
Addresses https://github.com/BerriAI/litellm/issues/6914
* build(model_prices_and_context_window.json): add tpm/rpm limits for all gemini models
Allows for ratelimit tracking for gemini models even with wildcard routing enabled
Addresses https://github.com/BerriAI/litellm/issues/6914
* feat(router.py): add tpm/rpm tracking on success/failure to global_router
Addresses https://github.com/BerriAI/litellm/issues/6914
* feat(router.py): support wildcard routes on router.get_model_group_usage()
* fix(router.py): fix linting error
* fix(router.py): implement get_remaining_tokens_and_requests
Addresses https://github.com/BerriAI/litellm/issues/6914
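A minimal sketch of wildcard routing with the router, assuming `GEMINI_API_KEY` is set; one entry covers every gemini model while usage is still tracked per resolved model group:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gemini/*",  # wildcard route
            "litellm_params": {"model": "gemini/*"},
        }
    ]
)
response = router.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "hi"}],
)
```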
* fix(router.py): fix linting errors
* test: fix test
* test: fix tests
* docs(config_settings.md): add missing dd env vars to docs
* fix(router.py): check if hidden params is dict
* feat(pass_through_endpoints/): support logging anthropic/gemini pass through calls to langfuse/s3/etc.
* fix(utils.py): allow disabling end user cost tracking with new param
Allows proxy admin to disable cost tracking for end user - keeps prometheus metrics small
* docs(configs.md): add disable_end_user_cost_tracking reference to docs
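In SDK terms the flag looks like the sketch below; on the proxy the same setting lives under `litellm_settings` in the config:

```python
import litellm

# Drop end-user ids from cost tracking to keep Prometheus cardinality small.
litellm.disable_end_user_cost_tracking = True
```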
* feat(key_management_endpoints.py): add support for restricting access to `/key/generate` by team/proxy level role
Enables admin to restrict key creation, and assign team admins to handle distributing keys
* test(test_key_management.py): add unit testing for personal / team key restriction checks
* docs: add docs on restricting key creation
* docs(finetuned_models.md): add new guide on calling finetuned models
* docs(input.md): cleanup anthropic supported params
Closes https://github.com/BerriAI/litellm/issues/6856
* test(test_embedding.py): add test for passing extra headers via embedding
* feat(cohere/embed): pass client to async embedding
* feat(rerank.py): add `/v1/rerank` if missing for cohere base url
Closes https://github.com/BerriAI/litellm/issues/6844
* fix(main.py): pass extra_headers param to openai
Fixes https://github.com/BerriAI/litellm/issues/6836
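A short example of the now-forwarded param; the header name is illustrative:

```python
import litellm

response = litellm.embedding(
    model="text-embedding-3-small",
    input=["hello world"],
    extra_headers={"X-Request-Source": "billing-job"},  # forwarded to OpenAI
)
```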
* fix(litellm_logging.py): don't disable global callbacks when dynamic callbacks are set
Fixes issue where global callbacks - e.g. prometheus - were overridden when langfuse was set dynamically
* fix(handler.py): fix linting error
* fix: fix typing
* build: add conftest to proxy_admin_ui_tests/
* test: fix test
* fix: fix linting errors
* test: fix test
* fix: fix pass through testing
* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)
* test: test tool_call conversion when arguments is empty dict
Fixes https://github.com/BerriAI/litellm/issues/6833
* fix(openai_like/handler.py): return more descriptive error message
Fixes https://github.com/BerriAI/litellm/issues/6812
* test: skip overloaded model
* docs(anthropic.md): update anthropic docs to show how to route to any new model
* feat(groq/): fake stream when 'response_format' param is passed
Groq doesn't support streaming when response_format is set
* feat(groq/): add response_format support for groq
Closes https://github.com/BerriAI/litellm/issues/6845
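Example of the resulting behavior, assuming `GROQ_API_KEY` is set - callers keep a uniform streaming interface even though Groq rejects streaming with JSON mode:

```python
import litellm

stream = litellm.completion(
    model="groq/llama3-8b-8192",
    messages=[{"role": "user", "content": "Reply with a JSON object."}],
    response_format={"type": "json_object"},
    stream=True,  # faked internally: one response re-chunked as a stream
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```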
* fix(o1_handler.py): remove fake streaming for o1
Closes https://github.com/BerriAI/litellm/issues/6801
* build(model_prices_and_context_window.json): add groq llama3.2b model pricing
Closes https://github.com/BerriAI/litellm/issues/6807
* fix(utils.py): fix handling ollama response format param
Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485
* docs(sidebars.js): refactor chat endpoint placement
* fix: fix linting errors
* test: fix test
* test: fix test
* fix(openai_like/handler): handle max retries
* fix(streaming_handler.py): fix streaming check for openai-compatible providers
* test: update test
* test: correctly handle model is overloaded error
* test: update test
* test: fix test
* test: mark flaky test
---------
Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
* track LiteLLM_OrganizationMembership
* add add_internal_user_to_organization
* add org membership to schema
* read organization membership when reading user info in auth checks
* add check for valid organization_id
* add test for test_create_new_user_in_organization
* test test_create_new_user_in_organization
* add new ADMIN role
* add test for org admins creating teams
* add test for test_org_admin_create_user_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* fix organization_role_based_access_check
* fix getting user members
* fix TeamBase
* fix types used for user role
* fix type checks
* sync prisma schema
* docs - organization admins
* fix: use organization_endpoints for /organization management
* add types for org member endpoints
* fix role name for org admin
* add type for member add response
* add organization/member_add
* add error handling for adding members to an org
* add nice doc string for organization/member_add
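A hedged sketch of the endpoint; the member payload shape and role string are assumptions based on the commits above:

```python
import requests

resp = requests.post(
    "http://localhost:4000/organization/member_add",
    headers={"Authorization": "Bearer sk-admin-key"},  # hypothetical key
    json={
        "organization_id": "org-1234",
        "member": {"user_id": "user-5678", "role": "org_admin"},  # assumed role value
    },
)
print(resp.json())
```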
* fix test_create_new_user_in_organization
* linting fix
* use simple route changes
* fix types
* add organization member roles
* add org admin auth checks
* add auth checks for orgs
* test for creating teams as org admin
* simplify org id usage
* fix typo
* test test_org_admin_create_user_team_wrong_org_permissions
* fix type check issue
* code quality fix
* fix schema.prisma
* add check deployment_is_active_for_environment
* add test for test_init_router_with_supported_environments
* show good example config for environments
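A sketch of such a config, expressed via the Python router; the `supported_environments` field name and `LITELLM_ENVIRONMENT` variable are assumptions inferred from the commits above:

```python
import os
from litellm import Router

os.environ["LITELLM_ENVIRONMENT"] = "production"

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "gpt-4o"},
            # deployment is only active when the running environment matches
            "model_info": {"supported_environments": ["staging", "production"]},
        }
    ]
)
```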
* docs clean up config.yaml
* docs cleanup
* docs configs
* docs specific env
* fix(proxy_server.py): use default azure credentials to support azure non-client secret kms
* fix(langsmith.py): raise error if credentials missing
* feat(langsmith.py): support error logging for langsmith + standard logging payload
Fixes https://github.com/BerriAI/litellm/issues/5738
* Fix hardcoding of schema in view check (#5749)
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* test(test_router_debug_logs.py): move to mock response
* Fix hardcoding of schema
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* fix(proxy_server.py): allow admin to disable ui via `DISABLE_ADMIN_UI` flag
* fix(router.py): fix default model name value
Fixes 55db19a1e4 (r1763712148)
* fix(utils.py): fix unbound variable error
* feat(rerank/main.py): add azure ai rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/5667
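A sketch of the new route; the deployment name is illustrative:

```python
import litellm

result = litellm.rerank(
    model="azure_ai/cohere-rerank-v3-english",  # hypothetical deployment name
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    top_n=1,
)
print(result.results)
```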
* feat(secret_detection.py): Allow configuring secret detection params
Allows admin to control which plugins run for secret detection. Prevents overzealous secret detection.
* docs(secret_detection.md): add secret detection guardrail docs
* fix: fix linting errors
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* Litellm fix router testing (#5748)
* test: fix testing - azure changed content policy error logic
* test: fix tests to use mock responses
* test(test_image_generation.py): handle api instability
* test(test_image_generation.py): handle azure api instability
* fix(utils.py): fix unbound variable error
* fix(utils.py): fix unbound variable error
* test: refactor test to use mock response
* test: mark flaky azure tests
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard (#5753)
Bumps [next](https://github.com/vercel/next.js) from 14.1.1 to 14.2.10.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.1.1...v14.2.10)
---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` (#5754)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
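With the fix, the typed usage object can be read without pydantic warnings; a small sketch:

```python
import litellm

response = litellm.completion(
    model="o1-mini",
    messages=[{"role": "user", "content": "Think step by step: 17 * 23?"}],
)
details = response.usage.completion_tokens_details
if details is not None:
    print("reasoning tokens:", details.reasoning_tokens)
```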
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog (#5750)
* dd - start tracking redis status on dd
* add async_service_success_hook / failure hook in custom logger
* add async_service_failure_hook
* log service failures on dd
* fix import error
* add test for redis errors / warning
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on (#5745)
* fix tag routing - raise correct error when no model matches the requested tags
* fix error string from tag based routing
* test router tag based routing
* raise 401 error when no tags available for deployment
* linting fix
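A hedged sketch of tag-based routing for context - deployment `tags` and request `metadata` tags follow litellm's tag-routing usage, and with filtering on and no matching deployment the router now raises the clearer error:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "gpt-4o", "tags": ["teamA"]},
        }
    ],
    enable_tag_filtering=True,
)
response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"tags": ["teamA"]},  # only deployments tagged 'teamA' match
)
```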
* [Feat] Log Request metadata on gcs bucket logging (#5743)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* fix(litellm_logging.py): fix logging message
* fix(rerank_api/main.py): fix linting errors
* fix(custom_guardrails.py): maintain backwards compatibility for older guardrails
* fix(rerank_api/main.py): fix cost tracking for rerank endpoints
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: steffen-sbt <148480574+steffen-sbt@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>