* feat(main.py): mock_response() - support 'litellm.ContextWindowExceededError' in mock response
enables quicker router/fallback/proxy debugging on context window errors
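A minimal sketch of the new behavior (assuming the mock string matches the error's qualified name, as described above):
```python
import litellm

# passing the exception's name as mock_response raises it without a real LLM call,
# so router/fallback/proxy paths can be exercised cheaply
try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_response="litellm.ContextWindowExceededError",
    )
except litellm.ContextWindowExceededError as e:
    print("mocked context window error:", e)
```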
* feat(exception_mapping_utils.py): extract special litellm errors from error str if calling `litellm_proxy/` as provider
Closes https://github.com/BerriAI/litellm/issues/7259
* fix(user_api_key_auth.py): mark the 'Received Proxy Server Request' span as span kind SERVER
Closes https://github.com/BerriAI/litellm/issues/7298
* build(model_prices_and_context_window.json): update groq models to specify 'supports_vision' parameter
Closes https://github.com/BerriAI/litellm/issues/7433
* docs(groq.md): add groq vision example to docs
Closes https://github.com/BerriAI/litellm/issues/7433
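A sketch of the documented pattern (model name is an assumption; any groq model with `supports_vision: true` in the cost map applies):
```python
import litellm

response = litellm.completion(
    model="groq/llama-3.2-11b-vision-preview",  # assumed vision-capable groq model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```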
* fix(prometheus.py): refactor self.litellm_proxy_failed_requests_metric to use label factory
* feat(prometheus.py): new 'litellm_proxy_failed_requests_by_tag_metric'
allows tracking failed requests by tag on proxy
* fix(prometheus.py): fix exception logging
* feat(prometheus.py): add new 'litellm_request_total_latency_by_tag_metric'
enables tracking latency by use-case
* feat(prometheus.py): add new llm api latency by tag metric
* feat(prometheus.py): new litellm_deployment_latency_per_output_token_by_tag metric
allows tracking deployment latency by tag
* fix(prometheus.py): refactor 'litellm_requests_metric' to use enum values + label factory
* feat(prometheus.py): new litellm_proxy_total_requests_by_tag metric
allows tracking total requests by tag
* feat(prometheus.py): new metric litellm_deployment_successful_fallbacks_by_tag
allows tracking deployment fallbacks by tag
* fix(prometheus.py): new 'litellm_deployment_failed_fallbacks_by_tag' metric
allows tracking failed fallbacks on deployment by custom tag
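For the `*_by_tag` metrics above, requests need a tag. A sketch of tagging a proxy request via request metadata (a pattern litellm's tag-based routing docs describe; key and URL are placeholders):
```python
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# the 'tags' list in metadata becomes the tag label on the new prometheus metrics
client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"metadata": {"tags": ["my-use-case"]}},
)
```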
* test: fix test
* test: rename test to run earlier
* test: skip flaky test
* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key
allows users to create rate limit tiers and associate them with keys
* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set
allows rate limit tiers to be easily applied to keys
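A sketch of the resulting flow (payload fields are assumptions; rate_limit_tiers.md below has the authoritative shape):
```python
import requests

BASE = "http://0.0.0.0:4000"
HEADERS = {"Authorization": "Bearer sk-1234"}  # placeholder proxy admin key

# 1. create a reusable budget ("tier") carrying the rate limits
requests.post(
    f"{BASE}/budget/new",
    headers=HEADERS,
    json={"budget_id": "high-usage-tier", "tpm_limit": 100_000, "rpm_limit": 1_000},
)

# 2. generate a key attached to that tier - its tpm/rpm/max budget now come
#    from the associated budget table row
key = requests.post(
    f"{BASE}/key/generate", headers=HEADERS, json={"budget_id": "high-usage-tier"}
).json()
print(key)
```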
* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers
make feature discoverable
* feat(key_management_endpoints.py): return litellm_budget_table value in key generate
makes it easy for the user to know the associated budget on key creation
* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`
* docs(key_management_endpoints.py): document budget_id usage
* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it
* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs
* fix(customer_endpoints.py): use new pydantic obj name
* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm
* Litellm dev 12 26 2024 p2 (#7432)
* (Feat) Add logging for `POST v1/fine_tuning/jobs` (#7426)
* init commit ft jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* (docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches
* update doc
* docs azure endpoints
* docs endpoints on azure
* docs azure batches api
* docs azure batches api
* fix(key_management_endpoints.py): fix key update to actually work
* test(test_key_management.py): add e2e test asserting ui key update call works
* fix: proxy/_types - fix linting errors
* test: update test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix: test
* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers
* fix: fix linting errors
* test: fix test
* fix: remove unused import
* test: update test
* docs(customer_endpoints.py): document new model_max_budget param
* test: specify unique key alias
* docs(budget_management_endpoints.py): document new model_max_budget param
* test: fix test
* test: fix tests
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* refactor(prometheus.py): refactor to use a factory method for setting label values
allows end-user-id disabling to be enforced across prometheus metrics e2e
* fix: fix linting error
* fix(prometheus.py): ensure label factory drops end-user value if disabled by user
* fix(prometheus.py): specify service_type in end user tracking get
* test: fix test
* test: add unit test for prometheus factory
* test: improve test (cover flag not set scenario)
* test(test_prometheus.py): e2e test covering if 'end_user_id' shows up in testing if disabled
scrapes the `/metrics` endpoint and scans the text to check if the id appears in emitted metrics
* fix(prometheus.py): stringify status code before logging it
* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'
Closes https://github.com/BerriAI/litellm/issues/7397
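With the fix, the helper can be called with just a model name; a minimal sketch:
```python
import litellm

# custom_llm_provider defaults to None - the provider is inferred from the model string
print(litellm.supports_response_schema(model="gemini/gemini-1.5-pro"))
```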
* refactor(langfuse/): call langfuse logger inside customlogger compatible langfuse class, refactor langfuse logger to use verbose_logger.debug instead of print_verbose
* refactor(litellm_pre_call_utils.py): move config based team callbacks inside dynamic team callback logic
enables simpler unit testing for config-based team callbacks
* fix(proxy/_types.py): handle TeamCallbackMetadata `None` values
drop `None` values if present; if all are `None`, use a default dict to avoid downstream errors
* test(test_proxy_utils.py): add unit test preventing future issues - asserts team_id in config state not popped off across calls
Fixes https://github.com/BerriAI/litellm/issues/6787
* fix(langfuse_prompt_management.py): add success + failure logging event support
* fix: fix linting error
* test: fix test
* test: fix test
* test: override o1 prompt caching - openai currently not working
* test: fix test
* fix(invoke_handler.py): fix mock response iterator to handle tool calling
returns tool call if returned by model response
* fix(prometheus.py): add new 'tokens_by_tag' metric on prometheus
allows tracking 'token usage' by task
* feat(prometheus.py): add input + output token tracking by tag
* feat(prometheus.py): add tag based deployment failure tracking
allows admin to track failure by use-case
* fix(prometheus.py): support streaming end user litellm_proxy_total_requests_metric tracking
* fix(prometheus.py): add 'requested_model' and 'end_user_id' to 'litellm_request_total_latency_metric_bucket'
enables latency tracking by end user + requested model
* fix(prometheus.py): add end user, user and requested model metrics to 'litellm_llm_api_latency_metric'
* test: update prometheus unit tests
* test(test_prometheus.py): update tests
* test(test_prometheus.py): fix test
* test: reorder test
* fix(proxy_track_cost_callback.py): log to db if only end user param given
* fix: allow jwt-auth-based end-user-id spend tracking to work
* fix(utils.py): fix 'get_end_user_id_for_cost_tracking' to use 'user_api_key_end_user_id'
more stable - works with jwt-auth based end user tracking as well
* test(test_jwt.py): add e2e unit test to confirm end user cost tracking works for spend logs
* test: update test to use end_user api key hash param
* fix(langfuse.py): support end user cost tracking via jwt auth + langfuse
logs end user to langfuse if decoded from jwt token
* fix: fix linting errors
* test: fix test
* test: fix test
* fix: fix end user id extraction
* fix: run test earlier
* add unit test for test_datadog_static_methods
* docs dd vars
* test_datadog_payload_environment_variables
* test_datadog_static_methods
* docs env vars
* fix table
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
* feat(cohere-+-clarifai): refactor integrations to use common base config class
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix 'is PDF used' check
* fix: fix linting error
* feat(langfuse/): support langfuse prompt management
Initial working commit for langfuse prompt management support
Closes https://github.com/BerriAI/litellm/issues/6269
* test: update test
* fix(litellm_logging.py): suppress linting error
* fix(edit_budget_modal.tsx): call `/budget/update` endpoint instead of `/budget/new`
allows updating existing budget on ui
* fix(user_api_key_auth.py): support cost tracking for end user via jwt field
* fix(presidio.py): support pii masking on sync logging callbacks
enables masking before logging to langfuse
* feat(utils.py): support retry policy logic inside '.completion()'
Fixes https://github.com/BerriAI/litellm/issues/6623
* fix(utils.py): support retry by retry policy on async logic as well
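A sketch of the new `.completion()` usage (mirrors the router's existing `retry_policy` param; the exact wiring on `.completion()` is an assumption):
```python
import litellm
from litellm.types.router import RetryPolicy

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    num_retries=3,
    # per-exception retry counts, applied on sync + async paths
    retry_policy=RetryPolicy(RateLimitErrorRetries=3, TimeoutErrorRetries=1),
)
```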
* fix(handle_jwt.py): set default leeway value
* test: fix test to handle jwt audience claim
* fix(cost_calculator.py): move to using `.get_model_info()` for cost per token calculations
ensures cost tracking is reliable - handles edge cases of parsing model cost map
* build(model_prices_and_context_window.json): add 'supports_response_schema' for select tgai models
Fixes https://github.com/BerriAI/litellm/pull/7037#discussion_r1872157329
* build(model_prices_and_context_window.json): remove 'pdf input' and 'vision' support from nova micro in model map
Bedrock docs indicate no support for micro - https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html
* fix(converse_transformation.py): support amazon nova tool use
* fix(opentelemetry): Add missing LLM request type attribute to spans (#7041)
* feat(opentelemetry): add LLM request type attribute to spans
* lint
* fix: curl usage (#7038)
curl -d, --data <data> is lowercase d
curl -D, --dump-header <filename> is uppercase D
references:
https://curl.se/docs/manpage.html#-d
https://curl.se/docs/manpage.html#-D
* fix(spend_tracking.py): handle empty 'id' in model response - when creating spend log
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(streaming_chunk_builder.py): handle initial id being empty string
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(anthropic_passthrough_logging_handler.py): add end user cost tracking for anthropic pass through endpoint
* docs(pass_through/): refactor docs location + add table on supported features for pass through endpoints
* feat(anthropic_passthrough_logging_handler.py): support end user cost tracking via anthropic sdk
* docs(anthropic_completion.md): add docs on passing end user param for cost tracking on anthropic sdk
* fix(litellm_logging.py): use standard logging payload if present in kwargs
prevent datadog logging error for pass through endpoints
* docs(bedrock.md): add rerank api usage example to docs
* bugfix/change dummy tool name format (#7053)
* fix viewing keys (#7042)
* ui new build
* build(model_prices_and_context_window.json): add bedrock region models to model cost map (#7044)
* bye (#6982)
* (fix) litellm router.aspeech (#6962)
* doc Migrating Databases
* fix aspeech on router
* test_audio_speech_router
* test_audio_speech_router
* docs show supported providers on batches api doc
* change dummy tool name format
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix: fix linting errors
* test: update test
* fix(litellm_logging.py): fix pass through check
* fix(test_otel_logging.py): fix test
* fix(cost_calculator.py): update handling for cost per second
* fix(cost_calculator.py): fix cost check
* test: fix test
* (fix) adding public routes when using custom header (#7045)
* get_api_key_from_custom_header
* add test_get_api_key_from_custom_header
* fix testing use 1 file for test user api key auth
* fix test user api key auth
* test_custom_api_key_header_name
* build: update ui build
---------
Co-authored-by: Doron Kopit <83537683+doronkopit5@users.noreply.github.com>
Co-authored-by: lloydchang <lloydchang@gmail.com>
Co-authored-by: hgulersen <haymigulersen@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(key_management_endpoints.py): override metadata field value on update
allow user to override tags
* feat(__init__.py): expose new disable_end_user_cost_tracking_prometheus_only metric
allow disabling end user cost tracking on prometheus - fixes cardinality issue
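A sketch of the new flag (settable in code, or via `litellm_settings` in the proxy config):
```python
import litellm

# keep end-user spend tracking in the db/spend logs, but drop the
# high-cardinality end_user label from prometheus metrics only
litellm.disable_end_user_cost_tracking_prometheus_only = True
```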
* fix(litellm_pre_call_utils.py): add key/team level enforced params
Fixes https://github.com/BerriAI/litellm/issues/6652
* fix(key_management_endpoints.py): allow user to pass in `enforced_params` as a top level param on /key/generate and /key/update
* docs(enterprise.md): add docs on enforcing required params for llm requests
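A sketch of issuing a key with the new top-level param (payload shape assumed; see the enterprise.md docs above):
```python
import requests

# every request made with this key must now include 'user' and
# 'metadata.generation_name', or the proxy rejects it
requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder admin key
    json={"enforced_params": ["user", "metadata.generation_name"]},
)
```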
* Add support of Galadriel API (#7005)
* fix(router.py): robust retry after handling
set retry-after time to 0 if there are >0 healthy deployments; handle base case of 1 deployment
* test(test_router.py): fix test
* feat(bedrock/): add support for 'nova' models
also adds explicit 'converse/' route for simpler routing
* fix: fix 'supports_pdf_input'
return whether the model supports pdf input in get_model_info
* feat(converse_transformation.py): support bedrock pdf input
* docs(document_understanding.md): add document understanding to docs
* fix(litellm_pre_call_utils.py): fix linting error
* fix(init.py): fix passing of bedrock converse models
* feat(bedrock/converse): support 'response_format={"type": "json_object"}'
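A sketch combining the nova + converse changes above (model id is an assumption):
```python
import litellm

resp = litellm.completion(
    model="bedrock/converse/amazon.nova-lite-v1:0",  # explicit converse/ route
    messages=[{"role": "user", "content": "Return a JSON object describing a cat."}],
    response_format={"type": "json_object"},  # json mode on bedrock converse
)
print(resp.choices[0].message.content)
```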
* fix(converse_handler.py): fix linting error
* fix(base_llm_unit_tests.py): fix test
* fix: fix test
* test: fix test
* test: fix test
* test: remove duplicate test
---------
Co-authored-by: h4n0 <4738254+h4n0@users.noreply.github.com>
* fix get_standard_logging_object_payload
* fix async_post_call_failure_hook
* fix post_call_failure_hook
* fix change
* fix _is_proxy_only_error
* fix async_post_call_failure_hook
* fix getting request body
* remove redundant code
* use a well-named original function name for auth errors
* fix logging auth failures on DD
* fix using request body
* use helper for _handle_logging_proxy_only_error
* add async_log_failure_event for dd
* use standard logging payload for DD logging
* use standard logging payload for DD
* fix: use standard logging payload (SLP) status
* allow opting into _create_v0_logging_payload
* add unit tests for DD logging payload
* fix dd logging tests
* feat(pass_through_endpoints/): support logging anthropic/gemini pass through calls to langfuse/s3/etc.
* fix(utils.py): allow disabling end user cost tracking with new param
Allows proxy admin to disable cost tracking for end user - keeps prometheus metrics small
* docs(configs.md): add disable_end_user_cost_tracking reference to docs
* feat(key_management_endpoints.py): add support for restricting access to `/key/generate` by team/proxy level role
Enables admin to restrict key creation, and assign team admins to handle distributing keys
* test(test_key_management.py): add unit testing for personal / team key restriction checks
* docs: add docs on restricting key creation
* docs(finetuned_models.md): add new guide on calling finetuned models
* docs(input.md): cleanup anthropic supported params
Closes https://github.com/BerriAI/litellm/issues/6856
* test(test_embedding.py): add test for passing extra headers via embedding
* feat(cohere/embed): pass client to async embedding
* feat(rerank.py): add `/v1/rerank` if missing for cohere base url
Closes https://github.com/BerriAI/litellm/issues/6844
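A sketch of the fixed behavior (model and api_base are placeholders):
```python
import litellm

resp = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    api_base="https://api.cohere.com",  # '/v1/rerank' is appended if missing
)
print(resp.results)
```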
* fix(main.py): pass extra_headers param to openai
Fixes https://github.com/BerriAI/litellm/issues/6836
* fix(litellm_logging.py): don't disable global callbacks when dynamic callbacks are set
Fixes issue where global callbacks (e.g. prometheus) were overridden when langfuse was set dynamically
* fix(handler.py): fix linting error
* fix: fix typing
* build: add conftest to proxy_admin_ui_tests/
* test: fix test
* fix: fix linting errors
* test: fix test
* fix: fix pass through testing
* fix(caching): convert args to equivalent kwargs in llm caching handler
prevent unexpected errors
* fix(caching_handler.py): don't pass args to caching
* fix(caching): remove all *args from caching.py
* fix(caching): consistent function signatures + abc method
* test(caching_unit_tests.py): add unit tests for llm caching
ensures coverage for common caching scenarios across different implementations
* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one
* fix(router.py): drop redis password requirement
* fix(proxy_server.py): fix faulty slack alerting check
* fix(langfuse.py): avoid copying functions/thread lock objects in metadata
fixes metadata copy error when a parent otel span is in metadata
* test: update test
* add langsmith_api_key to StandardCallbackDynamicParams
* create a file for langsmith types
* langsmith add key / team based logging
* add key based logging for langsmith
* fix langsmith key based logging
* fix linting langsmith
* remove NOQA violation
* add unit test coverage for all helpers in test langsmith
* test_langsmith_key_based_logging
* docs langsmith key based logging
* run langsmith tests in logging callback tests
* fix logging testing
* test_langsmith_key_based_logging
* test_add_callback_via_key_litellm_pre_call_utils_langsmith
* add debug statement langsmith key based logging
* test_langsmith_key_based_logging
* fix(__init__.py): add 'watsonx_text' as mapped llm api route
Fixes https://github.com/BerriAI/litellm/issues/6663
* fix(opentelemetry.py): fix passing parallel tool calls to otel
Fixes https://github.com/BerriAI/litellm/issues/6677
* refactor(test_opentelemetry_unit_tests.py): create a base set of unit tests for all logging integrations - test for parallel tool call handling
reduces bugs in repo
* fix(__init__.py): update provider-model mapping to include all known provider-model mappings
Fixes https://github.com/BerriAI/litellm/issues/6669
* feat(anthropic): support passing document in llm api call
* docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function
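A sketch of the documented call (message shape follows litellm's OpenAI-compatible format and is an assumption, as is the model id):
```python
import base64
import litellm

if litellm.supports_pdf_input(model="anthropic/claude-3-5-sonnet-20241022"):
    encoded = base64.b64encode(open("report.pdf", "rb").read()).decode()
    litellm.completion(
        model="anthropic/claude-3-5-sonnet-20241022",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this document."},
                {"type": "image_url", "image_url": f"data:application/pdf;base64,{encoded}"},
            ],
        }],
    )
```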
* fix(factory.py): fix linting error
* use CustomBatchLogger for GCS
* add GCS bucket logging type
* use batch logging for GCS bucket
* add gcs_bucket
* allow setting flush_interval on CustomBatchLogger
* set GCS_FLUSH_INTERVAL to 1s
* fix test_key_logging
* fix test_key_logging
* add docs on new env vars
* fix(deepseek/chat): convert content list to str
Fixes https://github.com/BerriAI/litellm/issues/6642
* test(test_deepseek_completion.py): implement base llm unit tests
increase robustness across providers
* fix(router.py): support content policy violation fallbacks with default fallbacks
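A sketch of the fixed setup (model names are placeholders): `default_fallbacks` now also cover content-policy-violation errors when no `content_policy_fallbacks` are set.
```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "claude-backup", "litellm_params": {"model": "claude-3-haiku-20240307"}},
    ],
    default_fallbacks=["claude-backup"],  # used for content policy violations too
)
```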
* fix(opentelemetry.py): refactor to move otel imports behind flag
Fixes https://github.com/BerriAI/litellm/issues/6636
* fix(opentelemetry.py): close span on successful completion
* fix(user_api_key_auth.py): allow user_role to default to none
* fix: mark flaky test
* fix(opentelemetry.py): move otelconfig.from_env to inside the init
prevent otel errors raised just by importing litellm
* fix(user_api_key_auth.py): fix auth error
* log error on prometheus service failure hook
* use a more accurate function name for wrapper that handles logging db metrics
* fix log_db_metrics
* test_log_db_metrics_failure_error_types
* fix linting
* fix auth checks
* fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards
* test: fix test
* test: reset test name
* test: update conftest to reload proxy server module between tests
* ci(config.yml): move langfuse out of local_testing
reduce ci/cd time
* ci(config.yml): cleanup langfuse ci/cd tests
* fix: update test to not use global proxy_server app module
* ci: move caching to a separate test pipeline
speed up ci pipeline
* test: update conftest to check if proxy_server attr exists before reloading
* build(conftest.py): don't block on inability to reload proxy_server
* ci(config.yml): update caching unit test filter to work on 'cache' keyword as well
* fix(encrypt_decrypt_utils.py): use function to get salt key
* test: mark flaky test
* test: handle anthropic overloaded errors
* refactor: create separate ci/cd pipeline for proxy unit tests
make ci/cd faster
* ci(config.yml): add litellm_proxy_unit_testing to build_and_test jobs
* ci(config.yml): generate prisma binaries for proxy unit tests
* test: readd vertex_key.json
* ci(config.yml): remove `-s` from proxy_unit_test cmd
speed up test
* ci: remove any 'debug' logging flag
speed up ci pipeline
* test: fix test
* test(test_braintrust.py): rerun
* test: add delay for braintrust test