Commit graph

508 commits

Author SHA1 Message Date
Ishaan Jaff
de473bee4b fix mypy linting errors 2025-03-12 12:13:19 -07:00
Ishaan Jaff
342741ede1 Merge branch 'main' into litellm_responses_api_support 2025-03-12 12:04:12 -07:00
Ishaan Jaff
c6a9e8cafe typing_extensions Annotated 2025-03-12 11:58:56 -07:00
Ishaan Jaff
e6b696370b BaseLiteLLMOpenAIResponseObject 2025-03-12 11:39:51 -07:00
Ishaan Jaff
fde75a068a working streaming logging 2025-03-12 00:02:39 -07:00
Ishaan Jaff
4ff6e41c15 ResponsesAPIStreamEvents 2025-03-11 23:42:35 -07:00
Ishaan Jaff
51dc24a405 _transform_response_api_usage_to_chat_usage 2025-03-11 22:26:44 -07:00
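The transform named above presumably maps OpenAI Responses API usage fields onto Chat Completions usage fields. A minimal sketch of that mapping, assuming the standard OpenAI field names; the function and fallback logic are illustrative, not LiteLLM's actual code:

```python
from typing import Dict


def transform_response_api_usage_to_chat_usage(usage: Dict) -> Dict:
    """Map Responses API usage (input_tokens/output_tokens) onto
    Chat Completions usage (prompt_tokens/completion_tokens)."""
    prompt_tokens = usage.get("input_tokens", 0)
    completion_tokens = usage.get("output_tokens", 0)
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        # the Responses API also reports total_tokens; fall back to the sum
        "total_tokens": usage.get("total_tokens", prompt_tokens + completion_tokens),
    }


print(transform_response_api_usage_to_chat_usage({"input_tokens": 12, "output_tokens": 34}))
```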
Ishaan Jaff
8ada5a469d add responses api to call types 2025-03-11 22:02:48 -07:00
Ishaan Jaff
24cb83b0e4 Response API cost tracking 2025-03-11 22:02:14 -07:00
Ishaan Jaff
8da714104b ResponsesAPIStreamingResponse 2025-03-11 17:48:15 -07:00
Ishaan Jaff
2ac5aa2477 ResponsesAPIOptionalRequestParams 2025-03-11 17:36:06 -07:00
Ishaan Jaff
eafc1be132 add ResponsesAPIResponse 2025-03-11 16:46:28 -07:00
Ishaan Jaff
0f8de3d0a5 add transform_request for OpenAI responses API 2025-03-11 16:33:26 -07:00
Ishaan Jaff
401a52e694 working transform 2025-03-11 15:24:42 -07:00
Ishaan Jaff
4b1b87eb67 openai reasoning initial types 2025-03-11 14:28:47 -07:00
omrishiv
0674491386
add support for Amazon Nova Canvas model (#7838)
* add initial support for Amazon Nova Canvas model

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* adjust name to AmazonNovaCanvas and map function variables to config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* tighten model name check

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* fix quality mapping

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add premium quality in config

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* support all Amazon Nova Canvas tasks

* remove unused import

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add tests for image generation tasks and fix payload

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* add missing util file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* update model prices backup file

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

* remove image tasks other than text->image

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

---------

Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
2025-03-10 08:02:00 -07:00
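A short usage sketch for the Nova Canvas support added in #7838 (the PR was narrowed to text-to-image only); the model ID follows AWS's published Bedrock naming, and credentials/region come from the usual AWS environment:

```python
import litellm

# Text -> image via the Amazon Nova Canvas support on Bedrock.
response = litellm.image_generation(
    model="bedrock/amazon.nova-canvas-v1:0",
    prompt="A watercolor painting of a lighthouse at dawn",
)
print(response.data[0].url or response.data[0].b64_json)
```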
Krrish Dholakia
574f5056c8 fix(utils.py): fix linting error 2025-03-09 20:47:12 -07:00
5aaee9
42b7921ca1
fix: perplexity returning both delta and message causes OpenWebUI to repeat text (#9081) 2025-03-09 19:46:31 -07:00
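For context, this fix targets Perplexity stream chunks that carried both a cumulative "message" and an incremental "delta", so clients such as OpenWebUI rendered the text twice. A hedged sketch of the kind of normalization involved; the helper is hypothetical, not the actual patch:

```python
def normalize_stream_choice(choice: dict) -> dict:
    """If a streamed choice carries both a cumulative message and an
    incremental delta, keep only the delta so clients don't render twice."""
    if "delta" in choice and "message" in choice:
        choice = {k: v for k, v in choice.items() if k != "message"}
    return choice


chunk_choice = {
    "index": 0,
    "delta": {"content": " world"},
    "message": {"role": "assistant", "content": "hello world"},
}
print(normalize_stream_choice(chunk_choice))  # delta only
```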
Krish Dholakia
4330ef8e81
Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077)
* feat(batches/): fix batch cost calculation - ensure it's accurate

use the correct cost value - prev. defaulting to non-batch cost

* feat(batch_utils.py): log batch models to spend logs + standard logging payload

makes it easy to understand how cost was calculated

* fix: fix stored payload for test

* test: fix test
2025-03-08 11:47:25 -08:00
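A sketch of the batch-pricing idea in #9077: price batch calls with batch-specific rates instead of defaulting to the regular per-token rates. The price keys below are illustrative, not LiteLLM's actual pricing schema:

```python
def calculate_batch_cost(usage: dict, prices: dict) -> float:
    """Prefer batch rates when present; fall back to regular rates."""
    input_rate = prices.get("batch_input_cost_per_token", prices["input_cost_per_token"])
    output_rate = prices.get("batch_output_cost_per_token", prices["output_cost_per_token"])
    return usage["prompt_tokens"] * input_rate + usage["completion_tokens"] * output_rate


prices = {
    "input_cost_per_token": 2e-6,
    "output_cost_per_token": 6e-6,
    "batch_input_cost_per_token": 1e-6,   # hypothetical 50% batch discount
    "batch_output_cost_per_token": 3e-6,
}
print(calculate_batch_cost({"prompt_tokens": 1000, "completion_tokens": 500}, prices))
```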
Krish Dholakia
0e3caf92b9
UI - new API Playground for testing LiteLLM translation (#9073)
* feat: initial commit - enable dev to see translated request

* feat(utils.py): expose new endpoint - `/utils/transform_request` to see the raw request sent by litellm

* feat(transform_request.tsx): allow user to see their transformed request

* refactor(litellm_logging.py): return raw request in 3 parts - api_base, headers, request body

easier to render each individually on UI vs. extracting from combined string

* feat: transform_request.tsx

working e2e raw request viewing

* fix(litellm_logging.py): fix transform viewing for bedrock models

* fix(litellm_logging.py): don't return sensitive headers in raw request headers

prevent accidental leak

* feat(transform_request.tsx): style improvements
2025-03-07 19:39:31 -08:00
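A rough sketch of the three-part response shape described in #9073, with hypothetical helper and field names: the outbound request is returned as api_base, headers, and body, and sensitive headers are masked before being shown on the UI:

```python
SENSITIVE_HEADERS = {"authorization", "api-key", "x-api-key"}


def build_raw_request_view(api_base: str, headers: dict, body: dict) -> dict:
    """Split the raw request into three parts and mask secret headers."""
    safe_headers = {
        k: ("*****" if k.lower() in SENSITIVE_HEADERS else v)
        for k, v in headers.items()
    }
    return {"api_base": api_base, "headers": safe_headers, "body": body}


print(build_raw_request_view(
    "https://api.openai.com/v1/chat/completions",
    {"Authorization": "Bearer sk-...", "Content-Type": "application/json"},
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
))
```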
Ishaan Jaff
b02af305de
[Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029)
* if merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* stash changes

* working merge_reasoning_content_in_choices with bedrock

* fix litellm_params accessor

* fix streaming handler

* merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* test_bedrock_stream_thinking_content_openwebui

* merge_reasoning_content_in_choices

* fix for _optional_combine_thinking_block_in_choices

* linting error fix
2025-03-06 18:32:58 -08:00
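A minimal sketch of the merge behavior in #9029: fold reasoning_content into the visible content inside <think> tags, so UIs that only render content (e.g. OpenWebUI) still display thinking tokens. The helper below is illustrative:

```python
def merge_reasoning_into_content(message: dict) -> dict:
    """Prepend reasoning_content to content, wrapped in <think> tags."""
    reasoning = message.pop("reasoning_content", None)
    if reasoning:
        message["content"] = f"<think>{reasoning}</think>" + (message.get("content") or "")
    return message


msg = {
    "role": "assistant",
    "reasoning_content": "user greeted me; reply politely",
    "content": "Hello!",
}
print(merge_reasoning_into_content(msg)["content"])
```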
Ishaan Jaff
f47987e673
(Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013)
* anthropic_messages_handler v0

* fix /messages

* working messages with router methods

* test_anthropic_messages_handler_litellm_router_non_streaming

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* AnthropicMessagesConfig

* _handle_anthropic_messages_response_logging

* working with /v1/messages endpoint

* working /v1/messages endpoint

* refactor to use router factory function

* use aanthropic_messages

* use BaseConfig for Anthropic /v1/messages

* track api key, team on /v1/messages endpoint

* fix get_logging_payload

* BaseAnthropicMessagesTest

* align test config

* test_anthropic_messages_with_thinking

* test_anthropic_streaming_with_thinking

* fix - display anthropic url for debugging

* test_bad_request_error_handling

* test_anthropic_messages_router_streaming_with_bad_request

* fix ProxyException

* test_bad_request_error_handling_streaming

* use provider_specific_header

* test_anthropic_messages_with_extra_headers

* test_anthropic_messages_to_wildcard_model

* fix gcs pub sub test

* standard_logging_payload

* fix unit testing for anthropic /v1/messages support

* fix pass through anthropic messages api

* delete dead code

* fix anthropic pass through response

* revert change to spend tracking utils

* fix get_litellm_metadata_from_kwargs

* fix spend logs payload json

* proxy_pass_through_endpoint_tests

* TestAnthropicPassthroughBasic

* fix pass through tests

* test_async_vertex_proxy_route_api_key_auth

* _handle_anthropic_messages_response_logging

* vertex_credentials

* test_set_default_vertex_config

* test_anthropic_messages_litellm_router_non_streaming_with_logging

* test_ageneric_api_call_with_fallbacks_basic

* test__aadapter_completion
2025-03-06 00:43:08 -08:00
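One way to exercise the /v1/messages endpoint from #9013 is to point the official Anthropic SDK at the LiteLLM proxy, which now follows the Anthropic Messages API spec; the URL, key, and model below are placeholders:

```python
from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:4000",  # LiteLLM proxy
    api_key="sk-1234",                 # LiteLLM virtual key
)

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content)
```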
Krish Dholakia
f6535ae6ad
Support format param for specifying image type (#9019)
* fix(transformation.py): support a 'format' parameter for images

allow user to specify mime type

* fix: pass mimetype via 'format' param

* feat(gemini/chat/transformation.py): support 'format' param for gemini

* fix(factory.py): support 'format' param on sync bedrock converse calls

* feat(bedrock/converse_transformation.py): support 'format' param for bedrock async calls

* refactor(factory.py): move to supporting 'format' param in base helper

ensures consistency in param support

* feat(gpt_transformation.py): filter out 'format' param

don't send invalid param to openai

* fix(gpt_transformation.py): fix translation

* fix: fix translation error
2025-03-05 19:52:53 -08:00
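Usage implied by #9019, with a placeholder model: the image's mime type travels in a "format" field inside the image_url block, so providers that want an explicit content type (e.g. Bedrock, Gemini) receive it:

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/cat.png",
                    "format": "image/png",  # new: explicit mime type
                },
            },
        ],
    }],
)
```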
Krish Dholakia
ec4f665e29
Return signature on anthropic streaming + migrate to signature field instead of signature_delta [MINOR bump] (#9021)
* Fix missing signature_delta in thinking blocks when streaming from Claude 3.7 (#8797)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* test: update test to enforce signature found

* feat(refactor-signature-param-to-be-'signature'-instead-of-'signature_delta'): keeps it in sync with anthropic

* fix: fix linting error

---------

Co-authored-by: Martin Krasser <krasserm@googlemail.com>
2025-03-05 19:33:54 -08:00
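The field migration in #9021, sketched as plain dicts: streamed thinking blocks now carry "signature" (matching Anthropic's API) instead of the older "signature_delta". The migration helper is illustrative:

```python
def migrate_thinking_block(block: dict) -> dict:
    """Map the deprecated signature_delta field onto signature."""
    if "signature_delta" in block and "signature" not in block:
        block["signature"] = block.pop("signature_delta")
    return block


old_block = {"type": "thinking", "thinking": "step 1 ...", "signature_delta": "EqQBCg..."}
print(migrate_thinking_block(old_block))
# {'type': 'thinking', 'thinking': 'step 1 ...', 'signature': 'EqQBCg...'}
```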
Krish Dholakia
5e386c28b2
Litellm dev 03 04 2025 p3 (#8997)
* fix(core_helpers.py): handle litellm_metadata instead of 'metadata'

* feat(batches/): ensure batches logs are written to db

makes batches response dict compatible

* fix(cost_calculator.py): handle batch response being a dictionary

* fix(batches/main.py): modify retrieve endpoints to use @client decorator

enables logging to work on retrieve call

* fix(batches/main.py): fix retrieve batch response type to be 'dict' compatible

* fix(spend_tracking_utils.py): send unique uuid for retrieve batch call type

create batch and retrieve batch share the same id

* fix(spend_tracking_utils.py): prevent duplicate retrieve batch calls from being double counted

* refactor(batches/): refactor cost tracking for batches - do it on retrieve, and within the established litellm_logging pipeline

ensures cost is always logged to db

* fix: fix linting errors

* fix: fix linting error
2025-03-04 21:58:03 -08:00
Ishaan Jaff
42931638df
(bug fix) - Fix Cache Health Check for Redis when redis_version is float (#8979)
* fix allow flexible types for redis version

* test_cache_ping_with_redis_version_float

* test_cache_ping_with_redis_version_float
2025-03-04 21:26:18 -08:00
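A minimal sketch of the idea behind #8979, with a hypothetical helper: accept redis_version whether the server reports it as a string or a number, instead of assuming one type:

```python
from typing import Union


def parse_redis_version(version: Union[str, float, int]) -> str:
    """Normalize redis_version to a string regardless of reported type."""
    return str(version)


print(parse_redis_version("7.2.0"))  # "7.2.0"
print(parse_redis_version(7.2))      # "7.2"
```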
Krish Dholakia
662c59adcf
Support caching on reasoning content + other fixes (#8973)
* fix(factory.py): pass on anthropic thinking content from assistant call

* fix(factory.py): fix anthropic messages to handle thinking blocks

Fixes https://github.com/BerriAI/litellm/issues/8961

* fix(factory.py): fix bedrock handling for assistant content in messages

Fixes https://github.com/BerriAI/litellm/issues/8961

* feat(convert_dict_to_response.py): handle reasoning content + thinking blocks in chat completion block

ensures caching works for anthropic thinking block

* fix(convert_dict_to_response.py): pass all message params to delta block

ensures streaming delta also contains the reasoning content / thinking block

* test(test_prompt_factory.py): remove redundant test

anthropic now supports assistant as the first message

* fix(factory.py): fix linting errors

* fix: fix code qa

* test: remove falsy test

* fix(litellm_logging.py): fix str conversion
2025-03-04 21:12:16 -08:00
Ishaan Jaff
ee7cd60fdb Revert "(bug fix) - don't log messages, prompt, input in model_parameters in StandardLoggingPayload (#8923)"
This reverts commit a119cb420b.
2025-03-01 11:05:33 -08:00
Ishaan Jaff
a119cb420b
(bug fix) - don't log messages, prompt, input in model_parameters in StandardLoggingPayload (#8923)
* fix _get_model_parameters

* test litellm logging

* test litellm logging
2025-03-01 10:27:24 -08:00
Ishaan Jaff
3a086cee06
(Feat) - Show Error Logs on LiteLLM UI (#8904)
* fix test_moderations_bad_model

* use async_post_call_failure_hook

* basic logging errors in DB

* show status on ui

* show status on ui

* ui show request / response side by side

* stash fixes

* working, track raw request

* track error info in metadata

* fix showing error / request / response logs

* show traceback on error viewer

* ui with traceback of error

* fix async_post_call_failure_hook

* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads

* test_get_error_information

* fix code quality

* rename proxy track cost callback test

* _should_store_errors_in_spend_logs

* feature flag error logs

* Revert "_should_store_errors_in_spend_logs"

This reverts commit 7f345df477.

* Revert "feature flag error logs"

This reverts commit 0e90c022bb.

* test_spend_logs_payload

* fix OTEL log_db_metrics

* fix import json

* fix ui linting error

* test_async_post_call_failure_hook

* test_chat_completion_bad_model_with_spend_logs

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2025-02-28 20:10:09 -08:00
Krish Dholakia
88eedb22b9
vertex ai anthropic thinking param support (#8853)
* fix(vertex_llm_base.py): handle credentials passed in as dictionary

* fix(router.py): support vertex credentials as json dict

* test(test_vertex.py): allows easier testing

mock anthropic thinking response for vertex ai

* test(vertex_ai_partner_models/): don't remove "@" from model

breaks anthropic cost calculation

* test: move testing

* fix: fix linting error

* fix: fix linting error

* fix(vertex_ai_partner_models/main.py): split @ for codestral model

* test: fix test

* fix: fix stripping "@" on mistral models

* fix: fix test

* test: fix test
2025-02-26 21:37:18 -08:00
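A usage sketch for the credential handling in #8853: vertex credentials may now be passed as a JSON dict (e.g. a loaded service-account file) rather than only a file path. The model ID and file name are placeholders:

```python
import json
import litellm

# Load a service-account file and pass its contents directly.
with open("service_account.json") as f:
    credentials = json.load(f)

response = litellm.completion(
    model="vertex_ai/claude-3-7-sonnet@20250219",  # partner model; note the "@"
    messages=[{"role": "user", "content": "hi"}],
    vertex_credentials=json.dumps(credentials),
)
```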
Krish Dholakia
ab7c4d1a0e
Litellm dev bedrock anthropic 3 7 v2 (#8843)
* feat(bedrock/converse/transformation.py): support claude-3-7-sonnet reasoning_content transformation

Closes https://github.com/BerriAI/litellm/issues/8777

* fix(bedrock/): support returning `reasoning_content` on streaming for claude-3-7

Resolves https://github.com/BerriAI/litellm/issues/8777

* feat(bedrock/): unify converse reasoning content blocks for consistency across anthropic and bedrock

* fix(anthropic/chat/transformation.py): handle deepseek-style 'reasoning_content' extraction within transformation.py

simpler logic

* feat(bedrock/): fix streaming to return blocks in consistent format

* fix: fix linting error

* test: fix test

* feat(factory.py): fix bedrock thinking block translation on tool calling

allows passing the thinking blocks back to bedrock for tool calling

* fix(types/utils.py): don't exclude provider_specific_fields on model dump

ensures consistent responses

* fix: fix linting errors

* fix(convert_dict_to_response.py): pass reasoning_content on root

* fix: test

* fix(streaming_handler.py): add helper util for setting model id

* fix(streaming_handler.py): fix setting model id on model response stream chunk

* fix(streaming_handler.py): fix linting error

* fix(streaming_handler.py): fix linting error

* fix(types/utils.py): add provider_specific_fields to model stream response

* fix(streaming_handler.py): copy provider specific fields and add them to the root of the streaming response

* fix(streaming_handler.py): fix check

* fix: fix test

* fix(types/utils.py): ensure messages content is always openai compatible

* fix(types/utils.py): fix delta object to always be openai compatible

only introduce new params if variable exists

* test: fix bedrock nova tests

* test: skip flaky test

* test: skip flaky test in ci/cd
2025-02-26 16:05:33 -08:00
Krish Dholakia
017c482d7b
fix(o_series_transformation.py): fix optional param check for o-serie… (#8787)
* fix(o_series_transformation.py): fix optional param check for o-series models

o3-mini and o1 do not support parallel tool calling

* fix(utils.py): support 'drop_params' for 'thinking' param across models

allows switching to older claude versions (or non-anthropic models) and lets the param be safely dropped

* fix: fix passing thinking param in optional params

allows dropping thinking_param where not applicable

* test: update old model

* fix(utils.py): fix linting errors

* fix(main.py): add param to acompletion
2025-02-26 12:26:55 -08:00
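Usage implied by #8787: send the Anthropic-style "thinking" param and let drop_params strip it when routing to a model that doesn't support it, instead of raising. The model name is a placeholder:

```python
import litellm

response = litellm.completion(
    model="claude-3-5-sonnet-20240620",  # older model without thinking support
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    drop_params=True,  # drop "thinking" where it isn't applicable
)
```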
Krish Dholakia
142b195784
Add anthropic thinking + reasoning content support (#8778)
* feat(anthropic/chat/transformation.py): add anthropic thinking param support

* feat(anthropic/chat/transformation.py): support returning thinking content for anthropic on streaming responses

* feat(anthropic/chat/transformation.py): return list of thinking blocks (include block signature)

allows usage in tool call responses

* fix(types/utils.py): extract and map reasoning_content from anthropic as content str

* test: add testing to ensure thinking_blocks are returned at the root

* fix(anthropic/chat/handler.py): return thinking blocks on streaming - include signature

* feat(factory.py): handle anthropic thinking blocks translation if in assistant response

* test: handle openai internal instability

* test: handle openai audio instability

* ci: pin anthropic dep

* test: handle openai audio instability

* fix: fix linting error

* refactor(anthropic/chat/transformation.py): refactor function to remain <50 LOC

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error

* fix: fix linting error
2025-02-24 21:54:30 -08:00
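A short usage sketch for #8778, using Anthropic's published model ID: request extended thinking and read back both the extracted reasoning_content string and the raw thinking blocks (including signatures) on the message:

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Is 9.11 greater than 9.9?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)

message = response.choices[0].message
print(message.reasoning_content)  # plain-text reasoning
print(message.thinking_blocks)    # [{"type": "thinking", ..., "signature": ...}]
```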
Krish Dholakia
9914c166b7
Litellm contributor prs 02 24 2025 (#8775)
* Adding VertexAI Claude 3.7 Sonnet (#8774)

Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>

* build(model_prices_and_context_window.json): add anthropic 3-7 models on vertex ai and bedrock

* Support video_url (#8743)

* Support video_url

Support VLMs that work with video.
Example implementation in vLLM: https://github.com/vllm-project/vllm/pull/10020

* llms openai.py: Add ChatCompletionVideoObject

Add data structures to support `video_url` in chat completion

* test test_completion.py: add test for video_url

* Arize Phoenix - ensure correct endpoint/protocol are used; and default to phoenix cloud (#8750)

* minor fixes to default to http and to ensure that the correct endpoint is used

* Update test_arize_phoenix.py

* prioritize http over grpc

---------

Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Pang Wu <104795337+pang-wu@users.noreply.github.com>
Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-24 18:55:48 -08:00
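A sketch of the video_url message shape added in #8743, mirroring image_url; the deployment name is a placeholder for a video-capable VLM served via vLLM:

```python
import litellm

response = litellm.completion(
    model="hosted_vllm/llava-hf/LLaVA-NeXT-Video-7B-hf",  # placeholder
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this clip."},
            {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
        ],
    }],
)
```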
Krish Dholakia
09462ba80c
Add cohere v2/rerank support (#8421) (#8605)
* Add cohere v2/rerank support (#8421)

* Support v2 endpoint cohere rerank

* Add tests and docs

* Make v1 default if old params used

* Update docs

* Update docs pt 2

* Update tests

* Add e2e test

* Clean up code

* Use inheritance for new config

* Fix linting issues (#8608)

* Fix cohere v2 failing test + linting (#8672)

* Fix test and unused imports

* Fix tests

* fix: fix linting errors

* test: handle tgai instability

* fix: skip service unavailable err

* test: print logs for unstable test

* test: skip unreliable tests

---------

Co-authored-by: vibhavbhat <vibhavb00@gmail.com>
2025-02-22 22:25:29 -08:00
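A usage sketch for the Cohere rerank support in #8605, with one of Cohere's published rerank models; per the commit notes, v1 stays the default when legacy params are used:

```python
import litellm

response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Berlin is in Germany.", "Paris is the capital of France."],
    top_n=1,
)
print(response.results)
```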
Krish Dholakia
21ea52105a
Support arize phoenix on litellm proxy (#7756) (#8715)
* Update opentelemetry.py

wip

* Update test_opentelemetry_unit_tests.py

* fix a few paths and tests

* fix path

* Update litellm_logging.py

* accidentally removed code

* Add type for protocol

* Add and update tests

* minor changes

* update and add additional arize phoenix test

* update existing test

* address feedback

* use standard_logging_object

* address feedback

Co-authored-by: Nate Mar <67926244+nate-mar@users.noreply.github.com>
2025-02-22 20:55:11 -08:00
Krish Dholakia
251467a525
add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684)
* build(model_prices_and_context_window.json): mark bedrock llama as supporting vision based on docs

* Add price for Cerebras llama3.3-70b (#8676)

* docs(readme.md): fix contributing docs

point people to new mock directory testing structure s/o @vibhavbhat

* build: update contributing readme

* docs(readme.md): improve docs

* docs(readme.md): cleanup readme on tests/

* docs(README.md): cleanup doc

* feat(infinity/): support returning documents when return_documents=True

* test(test_rerank.py): add e2e testing for cohere rerank

* fix: fix linting errors

* fix(together_ai/): fix together ai transformation

* fix: fix linting error

* fix: fix linting errors

* fix: fix linting errors

* test: mark cohere as flaky

* build: fix model supports check

* test: fix test

* test: mark flaky test

* fix: fix test

* test: fix test

---------

Co-authored-by: Yury Koleda <fut.wrk@gmail.com>
2025-02-20 21:23:54 -08:00
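The 'return_documents' support from #8684, sketched against litellm's rerank API; the infinity deployment name is illustrative:

```python
import litellm

response = litellm.rerank(
    model="infinity/BAAI/bge-reranker-base",  # illustrative self-hosted reranker
    query="capital of France",
    documents=["Berlin is in Germany.", "Paris is the capital of France."],
    return_documents=True,  # echo each document's text back in the results
)
print(response.results)
```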
Ishaan Jaff
bb6f43d12e
(Bug fix) - Cache Health not working when configured with prometheus service logger (#8687)
* fix serialize on safe json dumps

* test_non_standard_dict_keys_complex

* ui fix HealthCheckCacheParams

* fix HealthCheckCacheParams

* fix code qa

* test_cache_ping_failure

* test_cache_ping_health_check_includes_only_cache_attributes

* test_cache_ping_health_check_includes_only_cache_attributes
2025-02-20 13:41:56 -08:00
Krish Dholakia
982ee4b96b
LiteLLM Contributor PRs (02/18/2025). (#8643)
* fix: prisma migration script logging #6991 (#8617)

* fix: prisma migration script logging #6991

* chore: refactor using proxy logger

* remove noqa and move import below abs path

* Fix(litellm-vertexai-gemini): adding the supported files as per gemini documentation (#8559)

* bugfix(litellm-vertexai-gemini): adding the supported files as per gemini documentation

* fix(files.py): correct file type constant from 'text' to 'TXT'

* test: handle internal server error

---------

Co-authored-by: Justin Law <81255462+justinthelaw@users.noreply.github.com>
Co-authored-by: alymedhat10 <48028013+alymedhat10@users.noreply.github.com>
2025-02-19 21:52:46 -08:00
Krrish Dholakia
9470f57e86 build: extract <think>..</think> block for amazon deepseek r1 and put in reasoning_content 2025-02-19 21:10:38 -08:00
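A minimal sketch of the extraction described above, with a hypothetical helper: pull a leading <think>...</think> block out of the model output and surface it as reasoning_content, leaving only the answer in content:

```python
import re
from typing import Optional, Tuple

THINK_RE = re.compile(r"^\s*<think>(.*?)</think>\s*", re.DOTALL)


def split_think_block(text: str) -> Tuple[Optional[str], str]:
    """Return (reasoning_content, content) for R1-style output."""
    match = THINK_RE.match(text)
    if not match:
        return None, text
    return match.group(1).strip(), text[match.end():]


reasoning, answer = split_think_block("<think>compare decimals</think>9.9 is larger.")
print(reasoning)  # goes to reasoning_content
print(answer)     # stays in content
```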
Ishaan Jaff
fff15543d9
(UI + Proxy) Cache Health Check Page - Cleanup/Improvements (#8665)
* fixes for redis cache ping serialization

* fix cache ping check

* fix cache health check ui

* working error details on ui

* ui expand / collapse error

* move cache health check to diff file

* fix displaying error from cache health check

* ui allow copying errors

* ui cache health fixes

* show redis details

* clean up cache health page

* ui polish fixes

* fix error handling on cache health page

* fix redis_cache_params on cache ping response

* error handling

* cache health ping response

* fix error response from cache ping

* parsedLitellmParams

* fix cache health check

* fix cache health page

* cache safely handle json dumps issues

* test caching routes

* test_primitive_types

* fix caching routes

* litellm_mapped_tests

* fix pytest-mock

* fix _serialize

* fix linting on safe dumps

* test_default_max_depth

* pip install "pytest-mock==3.12.0"

* litellm_mapped_tests_coverage

* add readme on new litellm test dir
2025-02-19 19:08:50 -08:00
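Several commits above ("fix _serialize", "test_default_max_depth", "cache safely handle json dumps issues") point at defensive serialization. A hedged sketch of that idea, not LiteLLM's actual implementation: truncate at a default max depth and stringify anything json.dumps would reject:

```python
import json


def safe_dumps_value(obj, max_depth: int = 4, _depth: int = 0):
    """Recursively sanitize a value so json.dumps cannot fail on it."""
    if _depth >= max_depth:
        return "..."  # stop descending past the depth limit
    if isinstance(obj, dict):
        return {str(k): safe_dumps_value(v, max_depth, _depth + 1) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [safe_dumps_value(v, max_depth, _depth + 1) for v in obj]
    try:
        json.dumps(obj)
        return obj
    except (TypeError, ValueError):
        return str(obj)  # non-serializable (e.g. a client object) -> string


print(json.dumps(safe_dumps_value({"redis": {"client": object(), "ttl": 60}})))
```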
Ishaan Jaff
b88762b63c
(Polish/Fixes) - Fixes for Adding Team Specific Models (#8645)
* refactor get model info for team models

* allow adding a model to a team when creating team specific model

* ui update selected Team on Team Dropdown

* test_team_model_association

* testing for team specific models

* test_get_team_specific_model

* test: skip on internal server error

* remove model alias card on teams page

* linting fix _get_team_specific_model

* fix DeploymentTypedDict

* fix linting error

* fix code quality

* fix model info checks

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2025-02-18 21:11:57 -08:00
Ishaan Jaff
d1ba04d9d9
[Feature]: Redis Caching - Allow setting a namespace for redis cache (#8624)
* use _add_namespace_to_cache_key

* fix cache_control_args

* test_redis_caching_multiple_namespaces

* test_add_namespace_to_cache_key

* test_redis_caching_multiple_namespaces

* docs redis name space

* test_add_namespace_to_cache_key
2025-02-18 14:47:34 -08:00
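A minimal sketch of the namespacing behavior in #8624, with an illustrative helper: when a namespace is configured, prefix it onto every cache key so multiple tenants can share one Redis instance without collisions:

```python
from typing import Optional


def add_namespace_to_cache_key(key: str, namespace: Optional[str] = None) -> str:
    """Prefix the configured namespace onto a cache key."""
    return f"{namespace}:{key}" if namespace else key


print(add_namespace_to_cache_key("chat:abc123", namespace="team-a"))
# -> "team-a:chat:abc123"
```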
Ishaan Jaff
8024300825
(UI) Improvements to Add Team Model Flow (#8603)
* ui - use common team dropdown component

* re-use team component

* rename org field on add model

* handle add model submit

* working view model_id and team_id on root models page

* cleaner

* show all fields

* working model info view

* working team info selector

* clean up team id

* new component for model dashboard

* ui show table with dropdown

* make public model names like email

* revert changes to litellm model name

* fix litellm model name

* ui fix public model

* fix mappings

* fix conditional text input

* fix message

* ui fix bulk add models

* _add_team_model_to_db

* move model mgmt helper funcs

* test_add_team_model_to_db

* ui - display model team model name

* fix add model tab

* fix remove redundant info tab on models page

* dont pass model mappings all the way through

* fix jarring model name when adding team models

* fix edit model button

* delete button on model info

* ui fix model dashboard

* fix DeploymentTypedDict

* _is_model_access_group_for_wildcard_route

* test _get_public_model_name

* ui fix viewing public model name

* fix linting error

* fix linting errors

* fix selectedModel logic
2025-02-17 18:37:14 -08:00
Ishaan Jaff
2753de1458
(Bug Fix + Better Observability) - BudgetResetJob: (#8562)
* use class ResetBudgetJob

* refactor reset budget job

* update reset_budget job

* refactor reset budget job

* fix LiteLLM_UserTable

* refactor reset budget job

* add telemetry for reset budget job

* dd - log service success/failure on DD

* add detailed reset budget reset info on DD

* initialize_scheduled_background_jobs

* refactor reset budget job

* trigger service failure hook when fails to reset a budget for team, key, user

* fix resetBudgetJob

* unit testing for ResetBudgetJob

* test_duration_in_seconds_basic

* testing for triggering service logging

* fix logs on test teams fail

* remove unused imports

* fix import duration in s

* duration_in_seconds
2025-02-15 16:13:08 -08:00
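A sketch of a duration_in_seconds-style helper referenced in the commits above (the supported units are an assumption): turn budget-duration strings like "30s", "30m", "24h", or "7d" into seconds so the reset job knows when a budget expires:

```python
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}


def duration_in_seconds(duration: str) -> int:
    """Parse strings like '30s' / '30m' / '24h' / '7d' into seconds."""
    value, unit = duration[:-1], duration[-1]
    if unit not in UNITS or not value.isdigit():
        raise ValueError(f"unsupported duration: {duration!r}")
    return int(value) * UNITS[unit]


print(duration_in_seconds("7d"))  # 604800
```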
Krish Dholakia
a9276f27f9
fix(main.py): fix key leak error when unknown provider given (#8556)
* fix(main.py): fix key leak error when unknown provider given

don't return passed in args if unknown route on embedding

* fix(main.py): remove instances of {args} being passed in exception

prevent potential key leaks

* test(code_coverage/prevent_key_leaks_in_codebase.py): ban usage of {args} in codebase

* fix: fix linting errors

* fix: remove unused variable
2025-02-15 14:02:55 -08:00
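A sketch of the rule enforced by #8556: never interpolate the caller's raw arguments (which may contain an api_key) into exception text; echo only fields known to be safe. The function below is illustrative:

```python
def raise_unknown_provider(provider: str):
    # BAD (what the fix removed): f"Unknown provider, args: {args}"
    raise ValueError(
        f"Unknown custom_llm_provider: {provider!r}. "
        "See the docs for supported providers."
    )


try:
    raise_unknown_provider("not-a-real-provider")
except ValueError as e:
    print(e)
```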
Ishaan Jaff
e59376d6fc fix linting 2025-02-14 21:42:51 -08:00
Ishaan Jaff
68ae888730 fix test 2025-02-14 21:32:37 -08:00
Ishaan Jaff
946bc1e3fa fix: use mock tests for fine-tuning API requests to OpenAI 2025-02-14 21:20:20 -08:00