* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key
allows users to create rate limit tiers and associate them with keys
* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set
allows rate limit tiers to be easily applied to keys
* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers
make feature discoverable
* feat(key_management_endpoints.py): return litellm_budget_table value in key generate
makes it easy for users to see the associated budget on key creation
* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`
* docs(key_management_endpoints.py): document budget_id usage
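Taken together, the budget changes above enable a rate-limit-tier flow like the sketch below. The proxy URL, admin key, tier name, and limit values are illustrative placeholders; the `/budget/new` and `/key/generate` payload fields follow the params documented in these commits:
```python
import requests

PROXY = "http://localhost:4000"                # assumed local proxy address
HEADERS = {"Authorization": "Bearer sk-1234"}  # admin key, placeholder

# 1. create a reusable rate-limit / budget tier
requests.post(
    f"{PROXY}/budget/new",
    headers=HEADERS,
    json={"budget_id": "high-usage-tier", "tpm_limit": 100_000, "rpm_limit": 1_000},
)

# 2. associate the tier with a new key via `budget_id`
key = requests.post(
    f"{PROXY}/key/generate",
    headers=HEADERS,
    json={"budget_id": "high-usage-tier"},
).json()

# per the change above, the response includes the associated budget table values
print(key.get("litellm_budget_table"))
```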
* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it
* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs
* fix(customer_endpoints.py): use new pydantic obj name
* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm
* Litellm dev 12 26 2024 p2 (#7432)
* (Feat) Add logging for `POST v1/fine_tuning/jobs` (#7426)
* init commit ft jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* (docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches
* update doc
* docs azure endpoints
* docs endpoints on azure
* docs azure batches api
* docs azure batches api
* fix(key_management_endpoints.py): fix key update to actually work
* test(test_key_management.py): add e2e test asserting ui key update call works
* fix: proxy/_types - fix linting errors
* test: update test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix: test
* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers
* fix: fix linting errors
* test: fix test
* fix: remove unused import
* test: update test
* docs(customer_endpoints.py): document new model_max_budget param
* test: specify unique key alias
* docs(budget_management_endpoints.py): document new model_max_budget param
* test: fix test
* test: fix tests
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching
* fix(context_caching/transformation.py): just use last identified cache point
Fixes https://github.com/BerriAI/litellm/issues/6738
* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google
Fixes https://github.com/BerriAI/litellm/issues/6738
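For reference, a minimal sketch of how a caller marks a cacheable prefix; litellm translates the anthropic-style `cache_control` marker for gemini and, per the fix above, uses the first contiguous block of marked content as the cache point. Model name and prompt are illustrative:
```python
from litellm import completion

# mark a large, static prefix as cacheable; litellm picks the first
# contiguous block of marked content as the cache point
resp = completion(
    model="gemini/gemini-1.5-flash-002",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "Here is a long, static reference document: ..." * 200,
                    "cache_control": {"type": "ephemeral"},  # anthropic-style marker
                }
            ],
        },
        {"role": "user", "content": "Summarize the document."},
    ],
)
print(resp.usage)  # context caching tokens are now tracked here
```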
* fix(vertex_ai/gemini/): track context caching tokens
* refactor(gemini/): place transformation.py inside `chat/` folder
makes it easy for users to see we support the equivalent `chat` endpoint
* fix: fix import
* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder
make it easier to see cost calculation logic
* fix: fix linting errors
* fix: fix circular import
* feat(gemini/cost_calculator.py): support gemini context caching cost calculation
generalizes anthropic's cost calculation function and reuses it across anthropic + gemini
* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching
Closes https://github.com/BerriAI/litellm/issues/6891
* docs(gemini.md): add gemini context caching architecture diagram
make it easier for user to understand how context caching works
* docs(gemini.md): link to relevant gemini context caching code
* docs(gemini/context_caching): add README on GitHub, making it easy for devs to know context caching is supported + where to find the code
* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario
* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation
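Illustrative shape of the generalized prompt-cost calculation described above - not the exact implementation. The price keys mirror `model_prices_and_context_window.json` entries, and the threshold branch reflects gemini 1.5's higher per-token rate past 128k prompt tokens:
```python
# one generic prompt-cost helper shared across providers (anthropic, gemini, deepseek)
def prompt_cost(usage: dict, prices: dict) -> float:
    cached = usage.get("cached_tokens", 0)   # tokens served from the context cache
    fresh = usage["prompt_tokens"] - cached
    # gemini 1.5 models charge a higher per-token rate past 128k prompt tokens
    if usage["prompt_tokens"] > 128_000 and "input_cost_per_token_above_128k_tokens" in prices:
        rate = prices["input_cost_per_token_above_128k_tokens"]
    else:
        rate = prices["input_cost_per_token"]
    cache_rate = prices.get("cache_read_input_token_cost", rate)
    return fresh * rate + cached * cache_rate
```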
* test: fix test
* fix(main.py): fix retries being multiplied when using openai sdk
Closes https://github.com/BerriAI/litellm/pull/7130
* docs(prompt_management.md): add langfuse prompt management doc
* feat(team_endpoints.py): allow teams to add their own models
Enables teams to call their own fine-tuned models via the proxy
* test: add better enforcement check testing for `/model/new` now that teams can add their own models
* docs(team_model_add.md): tutorial for allowing teams to add their own models
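A hypothetical request a team admin might make per the tutorial above - the `model_info.team_id` scoping, key, and model names are assumptions for illustration:
```python
import requests

# hypothetical payload: a team admin registers the team's fine-tuned model;
# scoping via model_info.team_id is an assumption based on the tutorial above
requests.post(
    "http://localhost:4000/model/new",
    headers={"Authorization": "Bearer sk-team-admin"},  # team admin key, placeholder
    json={
        "model_name": "my-team-ft-model",
        "litellm_params": {"model": "openai/ft:gpt-4o-mini:my-org::abc123"},  # placeholder
        "model_info": {"team_id": "team-123"},
    },
)
```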
* test: fix test
* feat(customer_endpoints.py): support passing budget duration via `/customer/new` endpoint
Closes https://github.com/BerriAI/litellm/issues/5651
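Example of the new param (proxy URL and key are placeholders); `budget_duration` accepts litellm's usual duration strings ("30s" / "30m" / "30h" / "30d"):
```python
import requests

# budget resets every 30 days for this end-user
requests.post(
    "http://localhost:4000/customer/new",
    headers={"Authorization": "Bearer sk-1234"},  # admin key, placeholder
    json={"user_id": "end-user-1", "max_budget": 10.0, "budget_duration": "30d"},
)
```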
* docs: add missing params to swagger + api documentation test
* docs: add documentation for all key endpoints
documents all params on swagger
* docs(internal_user_endpoints.py): document all /user/new params
Ensures all params are documented
* docs(team_endpoints.py): add missing documentation for team endpoints
Ensures 100% param documentation on swagger
* docs(organization_endpoints.py): document all org params
Adds documentation for all params in org endpoint
* docs(customer_endpoints.py): add coverage for all params on /customer endpoints
ensures all /customer/* params are documented
* ci(config.yml): add endpoint doc testing to ci/cd
* fix: fix internal_user_endpoints.py
* fix(internal_user_endpoints.py): support 'duration' param
* fix(partner_models/main.py): fix anthropic re-raise exception on vertex
* fix: fix pydantic obj
* build(model_prices_and_context_window.json): add new vertex claude model names
vertex claude model names changed - the old names caused cost tracking errors
* fix(proxy_server.py): use default azure credentials to support azure non-client secret kms
* fix(langsmith.py): raise error if credentials missing
* feat(langsmith.py): support error logging for langsmith + standard logging payload
Fixes https://github.com/BerriAI/litellm/issues/5738
* Fix hardcoding of schema in view check (#5749)
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* test(test_router_debug_logs.py): move to mock response
* Fix hardcoding of schema
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* fix(proxy_server.py): allow admin to disable ui via `DISABLE_ADMIN_UI` flag
* fix(router.py): fix default model name value
Fixes 55db19a1e4 (r1763712148)
* fix(utils.py): fix unbound variable error
* feat(rerank/main.py): add azure ai rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/5667
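A minimal usage sketch - the deployment name is illustrative; the `azure_ai/` prefix routes to the new rerank support:
```python
from litellm import rerank

resp = rerank(
    model="azure_ai/cohere-rerank-v3-english",  # deployment name is a placeholder
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    top_n=1,
)
print(resp.results)
```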
* feat(secret_detection.py): Allow configuring secret detection params
Allows admins to control which plugins run for secret detection, preventing overzealous detection.
* docs(secret_detection.md): add secret detection guardrail docs
* fix: fix linting errors
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* Litellm fix router testing (#5748)
* test: fix testing - azure changed content policy error logic
* test: fix tests to use mock responses
* test(test_image_generation.py): handle api instability
* test(test_image_generation.py): handle azure api instability
* fix(utils.py): fix unbound variable error
* fix(utils.py): fix unbound variable error
* test: refactor test to use mock response
* test: mark flaky azure tests
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard (#5753)
Bumps [next](https://github.com/vercel/next.js) from 14.1.1 to 14.2.10.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.1.1...v14.2.10)
---
updated-dependencies:
- dependency-name: next
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` (#5754)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
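Sketch of what this fix enables - reading `completion_tokens_details` off the usage object without pydantic warnings (field availability depends on the model/provider):
```python
from litellm import completion

resp = completion(
    model="o1-mini",
    messages=[{"role": "user", "content": "Solve 24*17 step by step."}],
)

# completion_tokens_details is now mapped into the usage object instead of
# triggering pydantic warnings; reasoning_tokens is the o1-specific field
details = resp.usage.completion_tokens_details
if details is not None:
    print(details.reasoning_tokens)
```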
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog (#5750)
* dd - start tracking redis status on dd
* add async_service_success_hook / failure hook in custom logger
* add async_service_failure_hook
* log service failures on dd
* fix import error
* add test for redis errors / warning
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on (#5745)
* fix tag routing - raise the correct error when no model matches the requested tags
* fix error string from tag based routing
* test router tag based routing
* raise 401 error when no tags are available for the deployment
* linting fix
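A minimal sketch of tag-based routing for context (tag names and the single-deployment setup are illustrative):
```python
from litellm import Router

# tags on deployments + request metadata drive tag filtering
router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "openai/gpt-4", "tags": ["paid"]},
        }
    ],
    enable_tag_filtering=True,
)

# a request tagged "free" matches no deployment and now raises the
# correct "no deployments found" error instead of a generic one
resp = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"tags": ["free"]},
)
```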
* [Feat] Log Request metadata on gcs bucket logging (#5743)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* fix(litellm_logging.py): fix logging message
* fix(rerank_api/main.py): fix linting errors
* fix(custom_guardrails.py): maintain backwards compatibility for older guardrails
* fix(rerank_api/main.py): fix cost tracking for rerank endpoints
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: steffen-sbt <148480574+steffen-sbt@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>