Commit graph

4127 commits

Author SHA1 Message Date
Krish Dholakia
fe460f19f5 Add datadog health check support + fix bedrock converse cost tracking w/ region name specified (#7958)
* fix(bedrock/converse_handler.py): fix bedrock region name on async calls

* fix(utils.py): fix split model handling

Fixes bedrock cost calculation when region name is given

* feat(_health_endpoints.py): support health checking datadog integration

Closes https://github.com/BerriAI/litellm/issues/7921
2025-01-23 22:17:09 -08:00
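
A minimal sketch of exercising the datadog health check added in #7958 above against a running proxy - the `/health/services` endpoint path, proxy URL, and master key below are assumptions, not confirmed by the commit message:

```python
# Hedged sketch: checking the datadog integration via the proxy's services health endpoint.
# PROXY_BASE_URL, the master key, and the exact endpoint path are assumptions.
import requests

PROXY_BASE_URL = "http://localhost:4000"   # placeholder proxy URL
LITELLM_MASTER_KEY = "sk-1234"             # placeholder admin key

resp = requests.get(
    f"{PROXY_BASE_URL}/health/services",
    params={"service": "datadog"},
    headers={"Authorization": f"Bearer {LITELLM_MASTER_KEY}"},
    timeout=30,
)
print(resp.status_code, resp.json())
```
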
Ishaan Jaff
c0e83ab377 ui new build - 2025-01-23 21:11:23 -08:00
Krish Dholakia
7cced62815 Litellm dev 01 23 2025 p2 (#7962)
* fix(ui/): revert user team key view

* fix(view_key_table.tsx): fix default team view - show all personal keys

* fix(navbar.tsx): fix custom logo

Fixes https://github.com/BerriAI/litellm/issues/7895

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2025-01-23 21:02:15 -08:00
Ishaan Jaff
40edee3fd6 fix LiteLLM_ManagementEndpoint_MetadataFields 2025-01-23 20:59:38 -08:00
Ishaan Jaff
36a7954045 (UI) Set guardrails on Team Create and Edit page (#7963)
* team edit guardrail

* LiteLLM_TeamTable
2025-01-23 20:57:34 -08:00
Ishaan Jaff
7564190036 (Feat) allow setting guardrails on a team on the API (#7959)
* allow setting guardrails on a team

* test set guardrails on team

* set guardrails on a team

* fix LiteLLM_ManagementEndpoint_MetadataFields_Premium
2025-01-23 20:26:51 -08:00
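
As a rough illustration of #7959 above, a hedged sketch of attaching guardrails to a team through the management API; the `guardrails` field follows the commit title, but the exact request schema, guardrail name, URL, and key are assumptions:

```python
# Hedged sketch: setting guardrails on a team via /team/update.
# Field placement and the guardrail name are assumptions to verify against the team endpoints.
import requests

resp = requests.post(
    "http://localhost:4000/team/update",              # placeholder proxy URL
    headers={"Authorization": "Bearer sk-1234"},       # placeholder admin key
    json={
        "team_id": "my-team-id",                       # placeholder team id
        "guardrails": ["my-pre-call-guardrail"],       # hypothetical guardrail name
    },
    timeout=30,
)
print(resp.json())
```
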
Ishaan Jaff
fda318ea30 (Feat) - emit litellm_team_budget_reset_at_metric and litellm_api_key_budget_remaining_hours_metric on prometheus (#7946)
* set litellm_team_budget_reset_at_metric

* add _get_team_info_from_db_lru_cached

* _set_team_budget_metrics

* e2e test_team_budget_metrics

* update doc string

* add _get_remaining_hours_for_budget_reset

* fix team endpoints

* _get_remaining_hours_for_budget_reset

* _set_key_budget_metrics  on startup

* test_key_budget_metrics

* prom fixes for emitting key / team metrics

* fix _set_api_key_budget_metrics_after_api_request

* test_increment_remaining_budget_metrics

* unit test test_increment_remaining_budget_metrics

* test_initialize_remaining_budget_metrics
2025-01-23 18:12:47 -08:00
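
To see what #7946 above exposes, a hedged sketch that reads the two new budget metrics from a Prometheus server scraping the proxy; the metric names come from the commit, while the Prometheus URL is a placeholder:

```python
# Hedged sketch: querying the new LiteLLM budget metrics through the Prometheus HTTP API.
# The Prometheus URL is a placeholder; metric names are taken from the commit bullets.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed Prometheus server scraping the proxy

for query in (
    "litellm_team_budget_reset_at_metric",
    "litellm_api_key_budget_remaining_hours_metric",
):
    result = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": query},
        timeout=10,
    ).json()
    print(query, result["data"]["result"])
```
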
Ishaan Jaff
5d58000cd3 fix code quality check 2025-01-23 18:06:10 -08:00
Ishaan Jaff
607fe8ac63 (UI) - Set/edit guardrails on a virtual key (#7954)
* Revert "JWT Auth - `enforce_rbac` support + UI team view, spend calc fix (#7863)"

This reverts commit dca6904937.

* Revert "Litellm dev 01 10 2025 p2 (#7679)"

This reverts commit c4780479a9.

* ui - allow setting guardrails on a key

* working edit guardrails

* fix edit guardrails on a key

* Revert "Revert "JWT Auth - `enforce_rbac` support + UI team view, spend calc fix (#7863)""

This reverts commit 8f7b9ae1af.

* Revert "Revert "Litellm dev 01 10 2025 p2 (#7679)""

This reverts commit a609139dde.

* fix edit guardrail on ui

* fix list_guardrails
2025-01-23 18:01:54 -08:00
Ishaan Jaff
360322cf5e ui_view_spend_logs (#7952) 2025-01-23 17:27:01 -08:00
Ishaan Jaff
e0fb7eb4f7 (Testing + Refactor) - Unit testing for team and virtual key budget checks (#7945)
* unit testing for test_virtual_key_max_budget_check

* refactor _team_max_budget_check

* is_model_allowed_by_pattern
2025-01-23 16:58:16 -08:00
Krish Dholakia
b286bab075 Add attempted-retries and timeout values to response headers + more testing (#7926)
* feat(router.py): add retry headers to response

makes it easy to add testing to ensure model-specific retries are respected

* fix(add_retry_headers.py): clarify attempted retries vs. max retries

* test(test_fallbacks.py): add test for checking if max retries set for model is respected

* test(test_fallbacks.py): assert values for attempted retries and max retries are as expected

* fix(utils.py): return timeout in litellm proxy response headers

* test(test_fallbacks.py): add test to assert model specific timeout used on timeout error

* test: add bad model with timeout to proxy

* fix: fix linting error

* fix(router.py): fix get model list from model alias

* test: loosen test restriction - account for other events on proxy
2025-01-22 22:19:44 -08:00
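
A hedged sketch of inspecting the retry and timeout values that #7926 above adds to proxy response headers; the header names below are assumptions (the commit does not spell them out), and the proxy URL, key, and model are placeholders:

```python
# Hedged sketch: reading retry/timeout headers from a proxy call via the OpenAI SDK's raw response.
# The x-litellm-* header names are assumptions to verify against the actual proxy response.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholder proxy + key

raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",                                   # any model configured on the proxy
    messages=[{"role": "user", "content": "hi"}],
)
print(raw.headers.get("x-litellm-attempted-retries"))  # assumed header name
print(raw.headers.get("x-litellm-max-retries"))        # assumed header name
print(raw.headers.get("x-litellm-timeout"))            # assumed header name

response = raw.parse()  # the regular ChatCompletion object
```
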
Krish Dholakia
bf1639cb92 Litellm dev 01 22 2025 p4 (#7932)
* feat(main.py): add new 'provider_specific_header' param

allows passing extra header for specific provider

* fix(litellm_pre_call_utils.py): add unit test for pre call utils

* test(test_bedrock_completion.py): skip test now that bedrock supports this
2025-01-22 21:52:07 -08:00
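
For #7932 above, a hedged sketch of the new `provider_specific_header` completion param; the dict shape shown (provider name plus `extra_headers`) is inferred from the commit message, not a documented contract:

```python
# Hedged sketch: passing an extra header to a specific provider via provider_specific_header.
# The keys inside the dict are assumptions; requires ANTHROPIC_API_KEY in the environment.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "hi"}],
    provider_specific_header={
        "custom_llm_provider": "anthropic",                                 # assumed key
        "extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"},   # assumed key
    },
)
print(response.choices[0].message.content)
```
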
Krish Dholakia
65ca5f74b0 fix(utils.py): move adding custom logger callback to success event in… (#7905)
* fix(utils.py): move adding custom logger callback to success event into separate function + don't add success callback to failure event

if user is explicitly choosing 'success' callback, don't log failure as well

* test(test_utils.py): add unit test to ensure custom logger callback only adds callback to specific event

* fix(utils.py): remove string from list of callbacks once corresponding callback class is added

prevents floating values - simplifies testing

* fix(utils.py): fix linting error

* test: cleanup args before test

* test: fix test

* test: update test

* test: fix test
2025-01-22 21:49:09 -08:00
Krish Dholakia
4d89da9c97 Deepseek r1 support + watsonx qa improvements (#7907)
* fix(types/utils.py): support returning 'reasoning_content' for deepseek models

Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218

* fix(convert_dict_to_response.py): return deepseek response in provider_specific_field

allows for separating openai vs. non-openai params in model response

* fix(utils.py): support 'provider_specific_field' in delta chunk as well

allows deepseek reasoning content chunk to be returned to user from stream as well

Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218

* fix(watsonx/chat/handler.py): fix passing space id to watsonx on chat route

* fix(watsonx/): fix watsonx_text/ route with space id

* fix(watsonx/): qa item - also adds better unit testing for watsonx embedding calls

* fix(utils.py): rename to '..fields'

* fix: fix linting errors

* fix(utils.py): fix typing - don't show provider-specific field if none or empty - prevents default response from being non-oai compatible

* fix: cleanup unused imports

* docs(deepseek.md): add docs for deepseek reasoning model
2025-01-21 23:13:15 -08:00
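
A hedged sketch of reading the deepseek `reasoning_content` that #7907 above surfaces via provider-specific fields; the attribute/field names and the model alias are assumptions based on the commit bullets:

```python
# Hedged sketch: accessing deepseek reasoning content from the provider-specific fields.
# Field names are assumptions; requires DEEPSEEK_API_KEY in the environment.
import litellm

response = litellm.completion(
    model="deepseek/deepseek-reasoner",   # assumed model alias
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

message = response.choices[0].message
provider_fields = getattr(message, "provider_specific_fields", None) or {}
print(provider_fields.get("reasoning_content"))  # reasoning trace, if returned
print(message.content)                           # final answer
```
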
Ishaan Jaff
d1f86ad111 (Feat - prometheus) - emit litellm_overhead_latency_metric (#7913)
* add track_llm_api_timing

* add track_llm_api_timing

* test_litellm_overhead

* use ResponseMetadata class for setting hidden params and response overhead

* instrument http handler

* fix track_llm_api_timing

* track_llm_api_timing

* emit response overhead on hidden params

* fix resp metadata

* fix make_sync_openai_embedding_request

* test_aaaaatext_completion_endpoint fixes

* _get_value_from_hidden_params

* set_hidden_params

* test_litellm_overhead

* test_litellm_overhead

* test_litellm_overhead

* fix import

* test_litellm_overhead_stream

* add LiteLLMLoggingObject

* use diff folder for testing

* use diff folder for overhead testing

* test litellm overhead

* use typing

* clear typing

* test_litellm_overhead

* fix async_streaming

* update_response_metadata

* move test file

* emit litellm_overhead_latency_metric on prometheus

* add prometheus callback

* litellm_overhead_latency_metric_bucket

* fix apply hidden params

* fix StandardLoggingHiddenParams
2025-01-21 20:36:30 -08:00
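
The `litellm_overhead_latency_metric_bucket` bullet above suggests a histogram, so a hedged sketch of a p95 query through the Prometheus HTTP API; the metric name comes from the commit, while the Prometheus URL and time window are placeholders:

```python
# Hedged sketch: p95 of the LiteLLM overhead latency histogram via the Prometheus HTTP API.
# The _bucket metric name comes from the commit; URL and rate window are placeholders.
import requests

query = (
    "histogram_quantile(0.95, "
    "sum(rate(litellm_overhead_latency_metric_bucket[5m])) by (le))"
)
result = requests.get(
    "http://localhost:9090/api/v1/query",  # assumed Prometheus server
    params={"query": query},
    timeout=10,
).json()
print(result["data"]["result"])
```
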
Krish Dholakia
dec558ba4c Litellm dev 01 21 2025 p1 (#7898)
* fix(utils.py): don't pass 'anthropic-beta' header to vertex - will cause request to fail

* fix(utils.py): add flag to allow user to disable filtering invalid headers

ensure user can control behaviour

* style(utils.py): cleanup message

* test(test_utils.py): add unit test to cover invalid header filtering

* fix(proxy_server.py): fix custom openapi schema generation

* fix(utils.py): pass extra headers if set

* fix(main.py): fix image variation to use 'client' param
2025-01-21 20:36:11 -08:00
Ishaan Jaff
359a4ee3a9 (Feat) Add x-litellm-overhead-duration-ms and "x-litellm-response-duration-ms" in response from LiteLLM (#7899)
* add track_llm_api_timing

* add track_llm_api_timing

* test_litellm_overhead

* use ResponseMetadata class for setting hidden params and response overhead

* instrument http handler

* fix track_llm_api_timing

* track_llm_api_timing

* emit response overhead on hidden params

* fix resp metadata

* fix make_sync_openai_embedding_request

* test_aaaaatext_completion_endpoint fixes

* _get_value_from_hidden_params

* set_hidden_params

* test_litellm_overhead

* test_litellm_overhead

* test_litellm_overhead

* fix import

* test_litellm_overhead_stream

* add LiteLLMLoggingObject

* use diff folder for testing

* use diff folder for overhead testing

* test litellm overhead

* use typing

* clear typing

* test_litellm_overhead

* fix async_streaming

* update_response_metadata

* move test file

* apply metadata to the response object
2025-01-21 20:27:55 -08:00
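
A hedged sketch of reading the two timing headers named in the #7899 title above from a proxy chat completion; the proxy URL, key, and model are placeholders:

```python
# Hedged sketch: reading the LiteLLM timing headers off a raw proxy response.
# Header names come from the commit title; URL, key, and model are placeholders.
import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
    timeout=60,
)
print(resp.headers.get("x-litellm-overhead-duration-ms"))   # time spent inside LiteLLM
print(resp.headers.get("x-litellm-response-duration-ms"))   # total request duration
```
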
Ishaan Jaff
b056bc0ac3 (Bug fix) - Allow setting null for max_budget, rpm_limit, tpm_limit when updating values on a team (#7912)
* fix update_team

* fix test_key_limit_modifications
2025-01-21 19:19:36 -08:00
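
Illustrating the #7912 fix above, a hedged sketch that clears a team's limits by sending explicit nulls to `/team/update`; the field names match the commit title, and the URL, key, and team id are placeholders:

```python
# Hedged sketch: clearing team limits with explicit nulls, which this fix now accepts.
import requests

resp = requests.post(
    "http://localhost:4000/team/update",           # placeholder proxy URL
    headers={"Authorization": "Bearer sk-1234"},    # placeholder admin key
    json={
        "team_id": "my-team-id",   # placeholder team id
        "max_budget": None,        # serialized as JSON null
        "rpm_limit": None,
        "tpm_limit": None,
    },
    timeout=30,
)
print(resp.json())
```
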
Krish Dholakia
08244aca0e fix(proxy_server.py): fix get model info when litellm_model_id is set + move model analytics to free (#7886)
* fix(proxy_server.py): fix get model info when litellm_model_id is set

Fixes https://github.com/BerriAI/litellm/issues/7873

* test(test_models.py): add test to ensure get model info on specific deployment has same value as all model info

Fixes https://github.com/BerriAI/litellm/issues/7873

* fix(usage.tsx): make model analytics free

Fixes @iqballx's feedback

* fix(invoke_handler.py): fix bedrock error chunk parsing - return correct bedrock status code and error message if an error chunk appears in the stream

Improves bedrock stream error handling

* fix(proxy_server.py): fix linting errors

* test(test_auth_checks.py): remove redundant test

* fix(proxy_server.py): fix linting errors

* test: fix flaky test

* test: fix test
2025-01-21 08:19:07 -08:00
Ishaan Jaff
72692f2d77 (e2e testing + minor refactor) - Virtual Key Max budget check (#7888)
* use helper _virtual_key_max_budget_check

* e2e testing for budget exceeded errors

* e2e budget testing

* test_chat_completion_budget_update

* test_chat_completion_high_budget
2025-01-21 06:47:26 -08:00
Krish Dholakia
4c1d4acabc Litellm dev 01 20 2025 p1 (#7884)
* fix(initial-test-to-return-api-timeout-value-in-openai-timeout-exception): Makes it easier for user to debug why request timed out

* feat(openai.py): return timeout value + time taken on openai timeout errors

helps debug timeout errors

* fix(utils.py): fix num retries extraction logic when num_retries = 0

* fix(config_settings.md): litellm_logging.py

support printing payload to console if 'LITELLM_PRINT_STANDARD_LOGGING_PAYLOAD' is true

 Enables easier debugging

* test(test_auth_checks.py): remove common checks UserAPIKeyAuth enforcement check

* fix(litellm_logging.py): fix linting error
2025-01-20 21:45:48 -08:00
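
A hedged sketch of turning on the `LITELLM_PRINT_STANDARD_LOGGING_PAYLOAD` flag from #7884 above; only the env var name comes from the commit - the mock completion call is just a local way to trigger a logging payload:

```python
# Hedged sketch: print the standard logging payload to the console for debugging.
# Only the env var name comes from the commit; the rest is an ordinary mock call.
import os

os.environ["LITELLM_PRINT_STANDARD_LOGGING_PAYLOAD"] = "True"

import litellm

litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    mock_response="hello",   # mock response avoids needing a real API key
)
```
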
Ishaan Jaff
a4d3276bed (Feat) datadog_llm_observability callback - emit request_tags on logs (#7883)
* dd - emit tags on llm obs payload

* dd  - show requester tags on traces

* test_get_datadog_tags

* _get_datadog_tags

* fix dd POD_NAME

* test_get_datadog_tags
2025-01-20 20:36:27 -08:00
Krish Dholakia
f558f6ce7c JWT Auth - enforce_rbac support + UI team view, spend calc fix (#7863)
* fix(user_dashboard.tsx): fix spend calculation when team selected

sum all team keys, not user keys

* docs(admin_ui_sso.md): fix docs tabbing

* feat(user_api_key_auth.py): introduce new 'enforce_rbac' param on jwt auth

allows proxy admin to prevent any unmapped yet authenticated jwt tokens from calling proxy

Fixes https://github.com/BerriAI/litellm/issues/6793

* test: more unit testing + refactoring

* fix: fix returning id when obj not found in db

* fix(user_api_key_auth.py): add end user id tracking from jwt auth

* docs(token_auth.md): add doc on rbac with JWTs

* fix: fix unused params

* test: remove old test
2025-01-19 21:28:55 -08:00
Krish Dholakia
bdf8421386 Auth checks on invalid fallback models (#7871)
* fix(user_api_key_auth.py): handle clientside fallback model when item in list is dictionary

* fix(auth_checks.py): help user find invalid model names during dev

Ensure fallbacks work in prod

* fix(user_api_key_auth.py): fix linting check

* fix: cleanup unused variables

* fix: fix import

* fix(auth_checks.py): fix auth check
2025-01-19 21:28:10 -08:00
Krish Dholakia
e3e1fe59da feat(health_check.py): set upperbound for api when making health check call (#7865)
* feat(health_check.py): set upperbound for api when making health check call

prevents a bad model in the health check from hanging and causing pod restarts

* fix(health_check.py): cleanup task once completed

* fix(constants.py): bump default health check timeout to 1min

* docs(health.md): add 'health_check_timeout' to health docs on litellm

* build(proxy_server_config.yaml): add bad model to health check
2025-01-18 19:47:43 -08:00
Ishaan Jaff
19d979efc9 ui new build 2025-01-18 12:56:31 -08:00
Ishaan Jaff
8e92cd48a2 (UI Logs) - add pagination + filtering by key name/team name (#7860)
* fix remove emoji on logs page

* fix title of page

* ui - get countryIP

* ui lookup

* ui - get country from ip address

* show team and key alias on root

* working team / key filter

* working filters

* ui filtering by key / team alias

* simple search

* fix add pagination on view logs page

* add start / end time filters

* add custom time filter
2025-01-18 12:47:01 -08:00
Krrish Dholakia
34f406fd9e build(ui/): update ui 2025-01-18 08:42:51 -08:00
Krish Dholakia
762d872b7e fix(admins.tsx): fix logic for getting base url and create common get base url component (#7854)
Resolves https://github.com/BerriAI/litellm/issues/7761
2025-01-18 08:07:39 -08:00
Krish Dholakia
9bc4001c51 LiteLLM Minor Fixes & Improvements (2024/16/01) (#7826)
* fix(lm_studio/chat/transformation.py): Fix https://github.com/BerriAI/litellm/issues/7811

* fix(router.py): fix mock timeout check

* fix: drop model name from fallback args since it causes a conflict with the model=model that is provided later on. (#7806)

This error happens if you provide multiple fallback models to the completion function with model name defined in each one.

* fix(router.py): remove mock_timeout before sending to request

prevents reuse in fallbacks

* test: update test

* test: revert test change - wrong pr

---------

Co-authored-by: Dudu Lasry <david1542@users.noreply.github.com>
2025-01-17 20:59:21 -08:00
Krish Dholakia
1ed7946280 /key/delete - allow team admin to delete team keys (#7846)
* fix(key_management_endpoints.py): fix key delete to allow team admins + other proxy admins to delete keys

Fixes https://github.com/BerriAI/litellm/issues/7760

* fix(key_management_endpoints.py): remove unused variables

* fix(key_management_endpoints.py): fix linting error
2025-01-17 20:16:12 -08:00
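
A hedged sketch of what #7846 above permits - a team admin deleting a key that belongs to their team via `/key/delete`; the endpoint and payload follow the usual key management API, and all key values are placeholders:

```python
# Hedged sketch: team admin deleting a team-owned key via /key/delete.
import requests

resp = requests.post(
    "http://localhost:4000/key/delete",                       # placeholder proxy URL
    headers={"Authorization": "Bearer sk-team-admin-key"},     # team admin's own key (placeholder)
    json={"keys": ["sk-key-to-delete"]},                       # key(s) belonging to the team
    timeout=30,
)
print(resp.status_code, resp.json())
```
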
Krish Dholakia
2b58f16fda refactor: make bedrock image transformation requests async (#7840)
* refactor: initial commit for using separate sync vs. async transformation routes for bedrock

ensures no blocking calls e.g. when converting image url to b64

* perf(converse_transformation.py): make bedrock converse transformation async

asyncify's the bedrock message transformation - useful for handling image urls for bedrock

* fix(converse_handler.py): fix logging for async streaming

* style: cleanup unused imports
2025-01-17 20:14:15 -08:00
Ishaan Jaff
32c8933935 ui - new build 2025-01-17 20:07:06 -08:00
Krish Dholakia
6eb2346fd6 QA: ensure all bedrock regional models have same supported_ as base + Anthropic nested pydantic object support (#7844)
* build: ensure all regional bedrock models have same supported values as base bedrock model

prevents drift

* test(base_llm_unit_tests.py): add testing for nested pydantic objects

* fix(test_utils.py): add test_get_potential_model_names

* fix(anthropic/chat/transformation.py): support nested pydantic objects

Fixes https://github.com/BerriAI/litellm/issues/7755
2025-01-17 19:49:12 -08:00
Ishaan Jaff
c8d6254b78 ui new build 2025-01-17 19:23:41 -08:00
Ishaan Jaff
d4d6498e14 ui new build 2025-01-17 19:14:44 -08:00
Ishaan Jaff
16673ab488 (UI - View SpendLogs Table) (#7842)
* litellm log messages / responses

* add messages/response to schema.prisma

* add support for logging messages / responses in DB

* test_spend_logs_payload_with_prompts_enabled

* _get_messages_for_spend_logs_payload

* ui_view_spend_logs endpoint

* add tanstack and moment

* add uiSpendLogsCall

* ui view logs table

* ui view spendLogs table

* ui_view_spend_logs

* fix code quality

* test_spend_logs_payload_with_prompts_enabled

* _get_messages_for_spend_logs_payload

* test_spend_logs_payload_with_prompts_enabled

* test_spend_logs_payload_with_prompts_enabled

* ui view spend logs

* minor ui fix

* ui - update leftnav

* ui - clean up ui

* fix leftnav

* ui fix navbar

* ui fix moving chat ui tab
2025-01-17 18:53:45 -08:00
Krish Dholakia
fc7a931485 fix(key_management_endpoints.py): fix default allowed team member roles (#7843)
admin and user, not admin and member
2025-01-17 17:15:22 -08:00
Ishaan Jaff
b4735fbbc0 (Fix + Testing) - Add dd-trace-run to litellm ci/cd pipeline + fix bug caused by dd-trace patching OpenAI sdk (#7820)
* add dd trace to e2e docker run tests

* update dd trace v

* fix entrypoint

* dd trace fixes

* proxy_build_from_pip_tests

* build python3.13

* use py 3.13

* fix build from pip

* dd trace fix

* proxy_build_from_pip_tests

* bump build from pip
2025-01-16 22:03:09 -08:00
Ishaan Jaff
2177bdc836 (datadog llm observability) - fixes + improvements for using datadog llm observability logging integration (#7824)
* dd llm obs fixes

* _ensure_string_content

* fix _get_dd_llm_obs_payload_metadata
2025-01-16 22:02:24 -08:00
Krish Dholakia
f37dc43e92 test: initial commit enforcing testing on all anthropic pass through … (#7794)
* test: initial commit enforcing testing on all anthropic pass through functions

prevents future regressions

* test(test_unit_test_anthropic_pass_through.py): add unit test for '_get_user_from_metadata' function

* test(test_unit_test_anthropic_passthrough.py): add unit test for handle_logging_anthropic_collected_chunks

* test(test_unit_test_anthropic_pass_through): add coverage for all anthropic pass through functions
2025-01-15 22:02:35 -08:00
Krish Dholakia
fbdd88d79c test: initial test to enforce all functions in user_api_key_auth.py h… (#7797)
* test: initial test to enforce all functions in user_api_key_auth.py have direct testing

* test(test_user_api_key_auth.py): add is_allowed_route unit test

* test(test_user_api_key_auth.py): add more tests

* test(test_user_api_key_auth.py): add complete testing coverage for all functions in `user_api_key_auth.py`

* test(test_db_schema_changes.py): add a unit test to ensure all db schema changes are backwards compatible

gives user an easy rollback path

* test: fix schema compatibility test filepath

* test: fix test
2025-01-15 21:52:45 -08:00
Krish Dholakia
543655adc7 Litellm dev 01 14 2025 p2 (#7772)
* feat(pass_through_endpoints.py): fix anthropic end user cost tracking

* fix(anthropic/chat/transformation.py): use returned provider model for anthropic

handles the anthropic `-latest` tag in the request body, which was throwing cost calculation errors

ensures we can be accurate in our model cost tracking

* feat(model_prices_and_context_window.json): add gemini-2.0-flash-thinking-exp pricing

* test: update test to use assumption that user_api_key_dict can get anthropic user id

* test: fix test

* fix: fix test

* fix(anthropic_pass_through.py): uncomment previous anthropic end-user cost tracking code block

can't guarantee user api key dict always has end user id - too many code paths

* fix(user_api_key_auth.py): this allows end user id from request body to always be read and set in auth object

* fix(auth_check.py): fix linting error

* test: fix auth check

* fix(auth_utils.py): fix get end user id to handle metadata = None
2025-01-15 21:34:50 -08:00
Krish Dholakia
26c1b86f4e Litellm dev 01 2025 p4 (#7776)
* fix(gemini/): support gemini 'frequency_penalty' and 'presence_penalty'

Closes https://github.com/BerriAI/litellm/issues/7748

* feat(proxy_server.py): new env var to disable prisma health check on startup

* test: fix test
2025-01-14 21:49:25 -08:00
Krish Dholakia
142662a504 build(pyproject.toml): bump uvicorn dependency requirement (#7773)
* build(pyproject.toml): bump uvicorn dependency requirement

Fixes https://github.com/BerriAI/litellm/issues/7768

* fix(anthropic/chat/transformation.py): fix is_vertex_request check to actually use optional param passed in

Fixes https://github.com/BerriAI/litellm/issues/6898#issuecomment-2590860695

* fix(o1_transformation.py): fix azure o1 'is_o1_model' check to just check for o1 in model string

https://github.com/BerriAI/litellm/issues/7743

* test: load vertex creds
2025-01-14 21:47:11 -08:00
Ishaan Jaff
6a4e8c33b3 (fix) BaseAWSLLM - cache IAM role credentials when used (#7775)
* fix base aws llm

* fix auth with aws role

* test aws base llm

* fix base aws llm init

* run ci/cd again

* fix get_credentials

* ci/cd run again

* _auth_with_aws_role
2025-01-14 20:16:22 -08:00
Ishaan Jaff
25ae1e9117 (Feat) prometheus - emit remaining team budget metric on proxy startup (#7777)
* fix get_paginated_teams

* use _initialize_remaining_budget_metrics

* fix prom metric

* run ci/cd again

* fix run async func

* fix _initialize_prometheus_startup_metrics

* fix _initialize_prometheus_startup_metrics

* prom unit tests

* test_get_paginated_teams
2025-01-14 20:08:23 -08:00
Krish Dholakia
178cfe3c57 Litellm dev 01 13 2025 p2 (#7758)
* fix(factory.py): fix bedrock document url check

Make check more generic - if it starts with 'text' or 'application', assume it's a document and let it go through

 Fixes https://github.com/BerriAI/litellm/issues/7746

* feat(key_management_endpoints.py): support writing new key alias to aws secret manager - on key rotation

adds rotation endpoint to aws key management hook - allows for rotated litellm virtual keys with new key alias to be written to it

* feat(key_management_event_hooks.py): support rotating keys and updating secret manager

* refactor(base_secret_manager.py): support rotate secret at the base level

since it's just an abstraction function, it's easy to implement at the base manager level

* style: cleanup unused imports
2025-01-14 17:04:01 -08:00
Krish Dholakia
d7a13ad561 Support temporary budget increases on keys (#7754)
* fix(gpt_transformation.py): fix response_format translation check for 4o models

Fixes https://github.com/BerriAI/litellm/issues/7616

* feat(key_management_endpoints.py): support 'temp_budget_increase' and 'temp_budget_expiry' fields

Allow proxy admin to grant temporary budget increases to keys

* fix(proxy/_types.py): enforce temp_budget_increase and temp_budget_expiry are always passed together

* feat(user_api_key_auth.py): initial working temp budget increase logic

ensures key budget exceeded error checks for temp budget in key metadata

* feat(proxy_server.py): return the key max budget and key spend in the response headers

Allows clientside user to know their remaining limits

* test: add unit testing for new proxy utils

Ensures new key budget is correctly handled

* docs(temporary_budget_increase.md): add doc on temporary budget increase

* fix(utils.py): remove 3.5 from response_format check for now

not all azure 3.5 models support response_format

* fix(user_api_key_auth.py): return valid user api key auth object on all paths
2025-01-14 17:03:11 -08:00
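
A hedged sketch of granting a temporary budget bump using the `temp_budget_increase` / `temp_budget_expiry` fields from #7754 above; whether these are top-level params or live under key metadata is an assumption to verify against the key management docs:

```python
# Hedged sketch: temporary budget increase on an existing virtual key.
# Field names come from the commit; their exact placement and the timestamp format are assumptions.
import requests

resp = requests.post(
    "http://localhost:4000/key/update",                # placeholder proxy URL
    headers={"Authorization": "Bearer sk-1234"},        # placeholder proxy admin key
    json={
        "key": "sk-existing-virtual-key",               # placeholder key to update
        "temp_budget_increase": 100.0,                  # extra budget on top of max_budget (USD)
        "temp_budget_expiry": "2025-02-01T00:00:00Z",   # assumed ISO timestamp format
    },
    timeout=30,
)
print(resp.json())
```
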