* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assembling complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix: remove unused / junk function
* add test for streaming_chunks when error assembling
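A minimal sketch of the idea behind `_assemble_complete_response_from_streaming_chunks` (illustrative, not the litellm implementation; chunk shape follows the OpenAI streaming format):
```python
from typing import List, Optional


def _assemble_complete_response_from_streaming_chunks(
    chunks: List[dict],
) -> Optional[dict]:
    """Combine streamed delta chunks into a single response object.

    Returns None instead of raising when assembly fails, so a malformed
    stream can never crash the logging path (the "error assembling" case
    tested above).
    """
    try:
        content = "".join(
            chunk["choices"][0]["delta"].get("content") or "" for chunk in chunks
        )
        return {
            "choices": [{"message": {"role": "assistant", "content": content}}]
        }
    except Exception:
        return None
```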
* fix: move s3 to use CustomLogger
* add basic s3 logging test
* make s3 logger CustomLogger-compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
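Rough sketch of the batching pattern these commits move s3 logging to (class name and defaults are assumptions, not litellm's exact API):
```python
import asyncio
from typing import List


class S3BatchLogger:
    """Buffer log payloads and flush when either the batch size is reached
    or the flush interval elapses, instead of issuing one upload per request."""

    def __init__(self, batch_size: int = 512, flush_interval: float = 5.0):
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.buffer: List[dict] = []

    async def log_event(self, payload: dict) -> None:
        self.buffer.append(payload)
        if len(self.buffer) >= self.batch_size:
            await self.flush()

    async def periodic_flush(self) -> None:
        # run as a background task: asyncio.create_task(logger.periodic_flush())
        while True:
            await asyncio.sleep(self.flush_interval)
            await self.flush()

    async def flush(self) -> None:
        if not self.buffer:
            return
        batch, self.buffer = self.buffer, []
        # upload `batch` to the s3 bucket here (e.g. via aioboto3 / httpx)
```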
* fix(factory.py): bedrock: merge consecutive tool + user messages
Fixes https://github.com/BerriAI/litellm/issues/6007
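Bedrock's Converse API rejects back-to-back messages with the same role, and tool results map to user-role content blocks, hence the merge. A sketch of the idea (not the actual factory.py code, and assuming `content` is already a list of content blocks):
```python
def _merge_consecutive_user_turns(messages: list) -> list:
    """Collapse consecutive user/tool messages into one user turn so the
    resulting list alternates roles as Bedrock requires."""
    merged: list = []
    for msg in messages:
        role = "user" if msg["role"] in ("user", "tool") else msg["role"]
        if merged and role == "user" and merged[-1]["role"] == "user":
            merged[-1]["content"].extend(msg["content"])
        else:
            merged.append({"role": role, "content": list(msg["content"])})
    return merged
```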
* LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023)
* feat(together_ai/completion): handle together ai completion calls
* fix: handle list of int / list of list of int for text completion calls
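Text completion prompts can arrive as a string, a list of strings, a list of token ids, or a list of token-id lists; a sketch of the dispatch, where the `decode` callback is an assumption standing in for a tokenizer:
```python
from typing import Callable, List, Union

Prompt = Union[str, List[str], List[int], List[List[int]]]


def normalize_prompt(prompt: Prompt, decode: Callable[[List[int]], str]) -> List[str]:
    if isinstance(prompt, str):
        return [prompt]
    if all(isinstance(p, str) for p in prompt):
        return list(prompt)              # list of str
    if all(isinstance(p, int) for p in prompt):
        return [decode(prompt)]          # list of int -> one tokenized prompt
    return [decode(p) for p in prompt]   # list of list of int
```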
* fix(utils.py): check if base model in bedrock converse model list
Fixes https://github.com/BerriAI/litellm/issues/6003
* test(test_optional_params.py): add unit tests for bedrock optional param mapping
Fixes https://github.com/BerriAI/litellm/issues/6003
* feat(utils.py): enable passing dummy tool call for anthropic/bedrock calls if tool_use blocks exist
Fixes https://github.com/BerriAI/litellm/issues/5388
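Anthropic/Bedrock reject a message history containing `tool_use` blocks when the request carries no tools; a sketch of the workaround (the placeholder tool name and helper are illustrative, not the utils.py code):
```python
from typing import List, Optional


def add_dummy_tool_if_needed(
    messages: List[dict], tools: Optional[list]
) -> Optional[list]:
    has_tool_use = any(
        isinstance(m.get("content"), list)
        and any(block.get("type") == "tool_use" for block in m["content"])
        for m in messages
    )
    if has_tool_use and not tools:
        # placeholder definition so replayed tool conversations still validate
        return [{
            "name": "dummy_tool",
            "description": "placeholder for prior tool_use blocks",
            "input_schema": {"type": "object", "properties": {}},
        }]
    return tools
```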
* fixed an issue with tool use of claude models with anthropic and bedrock (#6013)
* fix(utils.py): handle empty schema for anthropic/bedrock
Fixes https://github.com/BerriAI/litellm/issues/6012
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix(proxy_cli.py): fix import route for app + health checks path (#6026)
* (testing): Enable testing us.anthropic.claude-3-haiku-20240307-v1:0. (#6018)
* fix(proxy_cli.py): fix import route for app + health checks gettysburg.wav
Fixes https://github.com/BerriAI/litellm/issues/5999
---------
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
---------
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
* fix(factory.py): correctly handle content in tool block
---------
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
* init litellm langfuse / gcs credentials in litellm logging obj
* add gcs key based test
* rename vars
* save standard_callback_dynamic_params in model call details
* add working gcs bucket key based logging
* test_basic_gcs_logging_per_request
* linting fix
* add doc on gcs bucket team based logging
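Sketch of the per-request credential resolution behind key-based GCS logging (field names are assumptions mirroring the `standard_callback_dynamic_params` entry above):
```python
from typing import Optional


def resolve_gcs_logging_config(
    standard_callback_dynamic_params: Optional[dict],
    default_bucket: str,
    default_service_account_path: str,
) -> dict:
    """Prefer credentials attached to the virtual key / team, falling back
    to the statically configured bucket."""
    dynamic = standard_callback_dynamic_params or {}
    return {
        "bucket_name": dynamic.get("gcs_bucket_name") or default_bucket,
        "path_service_account": dynamic.get("gcs_path_service_account")
        or default_service_account_path,
    }
```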
* track api key and team in prom latency metric
* add test for latency metric
* test prometheus success metrics for latency
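What tracking api key + team on the latency metric roughly looks like with prometheus_client (the metric and label names here are assumptions, not litellm's exact ones):
```python
from prometheus_client import Histogram

litellm_request_latency = Histogram(
    "litellm_request_total_latency_seconds",
    "End-to-end request latency, sliceable per key and team",
    labelnames=["hashed_api_key", "team", "model"],
)


def observe_request_latency(seconds: float, hashed_api_key: str, team: str, model: str):
    litellm_request_latency.labels(hashed_api_key, team, model).observe(seconds)
```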
* track team and key labels for deployment failures
* add test for litellm_deployment_failure_responses_total
* fix checks for premium user on prometheus
* log_success_fallback_event and log_failure_fallback_event
* log original_exception in log_success_fallback_event
* track key, team and exception status and class on fallback metrics
* use get_standard_logging_metadata
* fix import error
* track litellm_deployment_successful_fallbacks
* add test test_proxy_fallback_metrics
* add log_success_fallback_event
* fix test prometheus
* LiteLLM Minor Fixes & Improvements (09/26/2024) (#5925)
* fix(litellm_logging.py): don't initialize prometheus_logger if non premium user
Prevents bad error messages in logs
Fixes https://github.com/BerriAI/litellm/issues/5897
* Add Support for Custom Providers in Vision and Function Call Utils (#5688)
* Add Support for Custom Providers in Vision and Function Call Utils Lookup
* Remove parallel function call due to missing model info param
* Add Unit Tests for Vision and Function Call Changes
* fix-#5920: set header value to string to fix "'int' object has no att… (#5922)
* LiteLLM Minor Fixes & Improvements (09/24/2024) (#5880)
* LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842)
* feat(auth_utils.py): enable admin to allow client-side credentials to be passed
Makes it easier for devs to experiment with finetuned fireworks ai models
* feat(router.py): allow setting configurable_clientside_auth_params for a model
Closes https://github.com/BerriAI/litellm/issues/5843
* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit
Fixes https://github.com/BerriAI/litellm/issues/5850
* fix(azure_ai/): support content list for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4237
* fix(litellm_logging.py): always set saved_cache_cost
Set to 0 by default
* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing
handles calling 405b+ size models
* fix(slack_alerting.py): fix error alerting for failed spend tracking
Fixes regression with slack alerting error monitoring
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error
* docs(bedrock.md): add llama3-1 models
* test: fix tests
* fix(azure_ai/chat): fix transformation for azure ai calls
* feat(azure_ai/embed): Add azure ai embeddings support
Closes https://github.com/BerriAI/litellm/issues/5861
* fix(azure_ai/embed): enable async embedding
* feat(azure_ai/embed): support azure ai multimodal embeddings
* fix(azure_ai/embed): support async multi modal embeddings
* feat(together_ai/embed): support together ai embedding calls
* feat(rerank/main.py): log source documents for rerank endpoints to langfuse
improves rerank endpoint logging
* fix(langfuse.py): support logging `/audio/speech` input to langfuse
* test(test_embedding.py): fix test
* test(test_completion_cost.py): fix helper util
* fix-#5920: set header value to string to fix "'int' object has no attribute 'encode'"
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* Revert "fix-#5920: set header value to string to fix "'int' object has no att…" (#5926)
This reverts commit a554ae2695.
* build(model_prices_and_context_window.json): add azure ai cohere rerank model pricing
Enables cost tracking for azure ai cohere rerank models
* fix(litellm_logging.py): fix debug log to be clearer
Closes https://github.com/BerriAI/litellm/issues/5909
* test(test_utils.py): fix test name
* fix(azure_ai/cost_calculator.py): support cost tracking for azure ai rerank models
* fix(azure_ai): fix azure ai base model cost tracking for rerank endpoints
* fix(converse_handler.py): support new llama 3-2 models
Fixes https://github.com/BerriAI/litellm/issues/5901
* fix(litellm_logging.py): ensure response is redacted for standard message logging
Fixes https://github.com/BerriAI/litellm/issues/5890#issuecomment-2378242360
* fix(cost_calculator.py): use 'get_model_info' for cohere rerank cost calculation
allows user to set custom cost for model
* fix(config.yml): fix docker hub auth
* build(config.yml): add docker auth to all tests
* fix(db/create_views.py): fix linting error
* fix(main.py): fix circular import
* fix(azure_ai/__init__.py): fix circular import
* fix(main.py): fix import
* fix: fix linting errors
* test: fix test
* fix(proxy_server.py): pass premium user value on startup
used for prometheus init
---------
Co-authored-by: Cole Murray <colemurray.cs@gmail.com>
Co-authored-by: bravomark <62681807+bravomark@users.noreply.github.com>
* handle streaming errors for azure ai studio
* [Perf Proxy] parallel request limiter - use one cache update call (#5932)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
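The single-update idea, sketched against a redis.asyncio client (the real limiter goes through litellm's cache layer):
```python
from typing import Dict

import redis.asyncio as redis


async def async_set_cache_pipeline(
    client: redis.Redis, counters: Dict[str, int], ttl: int
) -> None:
    """Write every rate-limiter counter for a request (key, user, team,
    end-user) in one round trip instead of one SET per counter."""
    pipe = client.pipeline()
    for key, value in counters.items():
        pipe.set(key, value, ex=ttl)
    await pipe.execute()
```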
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* test: fix test
* test(test_rerank.py): fix test
---------
Co-authored-by: Cole Murray <colemurray.cs@gmail.com>
Co-authored-by: bravomark <62681807+bravomark@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842)
* feat(auth_utils.py): enable admin to allow client-side credentials to be passed
Makes it easier for devs to experiment with finetuned fireworks ai models
* feat(router.py): allow setting configurable_clientside_auth_params for a model
Closes https://github.com/BerriAI/litellm/issues/5843
* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit
Fixes https://github.com/BerriAI/litellm/issues/5850
* fix(azure_ai/): support content list for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4237
* fix(litellm_logging.py): always set saved_cache_cost
Set to 0 by default
* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing
handles calling 405b+ size models
* fix(slack_alerting.py): fix error alerting for failed spend tracking
Fixes regression with slack alerting error monitoring
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error
* docs(bedrock.md): add llama3-1 models
* test: fix tests
* fix(azure_ai/chat): fix transformation for azure ai calls
* feat(litellm_logging.py): update standard logging payload to include debug information for cost failures
Also includes fixes for cohere rerank cost tracking + databricks llama2 model cost tracking
Easier to repro cost failures and improve reliability in prod
* fix(proxy_server.py): emit cost failure debug info for slack alerting
Improves debug information for cost tracking failures, on slack alerting
* fix(proxy_server.py): use default azure credentials to support azure non-client secret kms
* fix(langsmith.py): raise error if credentials missing
* feat(langsmith.py): support error logging for langsmith + standard logging payload
Fixes https://github.com/BerriAI/litellm/issues/5738
* Fix hardcoding of schema in view check (#5749)
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* test(test_router_debug_logs.py): move to mock response
* Fix hardcoding of schema
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* fix(proxy_server.py): allow admin to disable ui via `DISABLE_ADMIN_UI` flag
* fix(router.py): fix default model name value
Fixes 55db19a1e4 (r1763712148)
* fix(utils.py): fix unbound variable error
* feat(rerank/main.py): add azure ai rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/5667
* feat(secret_detection.py): Allow configuring secret detection params
Allows admin to control what plugins to run for secret detection. Prevents overzealous secret detection.
* docs(secret_detection.md): add secret detection guardrail docs
* fix: fix linting errors
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* Litellm fix router testing (#5748)
* test: fix testing - azure changed content policy error logic
* test: fix tests to use mock responses
* test(test_image_generation.py): handle api instability
* test(test_image_generation.py): handle azure api instability
* fix(utils.py): fix unbound variable error
* fix(utils.py): fix unbound variable error
* test: refactor test to use mock response
* test: mark flaky azure tests
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard (#5753)
Bumps [next](https://github.com/vercel/next.js) from 14.1.1 to 14.2.10.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.1.1...v14.2.10)
---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` (#5754)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog (#5750)
* dd - start tracking redis status on dd
* add async_service_succes_hook / failure hook in custom logger
* add async_service_failure_hook
* log service failures on dd
* fix import error
* add test for redis errors / warning
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on (#5745)
* fix tag routing - raise correct error when no model with tag based routing
* fix error string from tag based routing
* test router tag based routing
* raise 401 error when no tags available for deployment
* linting fix
* [Feat] Log Request metadata on gcs bucket logging (#5743)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* fix(litellm_logging.py): fix logging message
* fix(rerank_api/main.py): fix linting errors
* fix(custom_guardrails.py): maintain backwards compatibility for older guardrails
* fix(rerank_api/main.py): fix cost tracking for rerank endpoints
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: steffen-sbt <148480574+steffen-sbt@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
* fix(caching.py): set ttl for async_increment cache
fixes issue where ttl for redis client was not being set on increment_cache
Fixes https://github.com/BerriAI/litellm/issues/5609
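Sketch of the fix: pair the INCR with an EXPIRE in one pipeline so the TTL is actually applied (redis.asyncio shown; litellm wraps this in its cache classes):
```python
from typing import Optional

import redis.asyncio as redis


async def async_increment(
    client: redis.Redis, key: str, value: float, ttl: Optional[int]
) -> float:
    pipe = client.pipeline()
    pipe.incrbyfloat(key, value)
    if ttl is not None:
        pipe.expire(key, ttl)  # previously never sent, so counters lived forever
    results = await pipe.execute()
    return results[0]  # the incremented value
```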
* fix(caching.py): fix increment cache w/ ttl for sync increment cache on redis
Fixes https://github.com/BerriAI/litellm/issues/5609
* fix(router.py): support adding retry policy + allowed fails policy via config.yaml
* fix(router.py): don't cooldown single deployments
No point, as there's no other deployment to load balance with.
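In sketch form, the guard just short-circuits the cooldown decision (illustrative names):
```python
def should_cooldown(healthy_deployments: list, exceeded_failure_threshold: bool) -> bool:
    # With one deployment in the group there is nothing to fail over to;
    # cooling it down would only turn degraded responses into hard errors.
    if len(healthy_deployments) <= 1:
        return False
    return exceeded_failure_threshold
```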
* fix(user_api_key_auth.py): support setting allowed email domains on jwt tokens
Closes https://github.com/BerriAI/litellm/issues/5605
* docs(token_auth.md): add user upsert + allowed email domain to jwt auth docs
* fix(litellm_pre_call_utils.py): fix dynamic key logging when team id is set
Fixes issue where key logging would not be set if team metadata was not none
* fix(secret_managers/main.py): load environment variables correctly
Fixes issue where os.environ/ was not being loaded correctly
* test(test_router.py): fix test
* feat(spend_tracking_utils.py): support logging additional usage params - e.g. prompt caching values for deepseek
* test: fix tests
* test: fix test
* test: fix test
* test: fix test
* test: fix test
* fix(cost_calculator.py): move to debug for noisy warning message on cost calculation error
Fixes https://github.com/BerriAI/litellm/issues/5610
* fix(databricks/cost_calculator.py): Handles model name issues for databricks models
* fix(main.py): fix stream chunk builder for multiple tool calls
Fixes https://github.com/BerriAI/litellm/issues/5591
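The bug: deltas from different tool calls were concatenated as one; the fix groups them by `index` first. A sketch over OpenAI-format chunks (not the main.py implementation):
```python
from collections import defaultdict
from typing import Dict, List


def combine_tool_call_deltas(chunks: List[dict]) -> List[dict]:
    calls: Dict[int, dict] = defaultdict(
        lambda: {"id": None, "type": "function", "function": {"name": "", "arguments": ""}}
    )
    for chunk in chunks:
        for delta in chunk["choices"][0]["delta"].get("tool_calls") or []:
            call = calls[delta["index"]]  # group by index, not arrival order
            if delta.get("id"):
                call["id"] = delta["id"]
            fn = delta.get("function") or {}
            call["function"]["name"] += fn.get("name") or ""
            call["function"]["arguments"] += fn.get("arguments") or ""
    return [calls[i] for i in sorted(calls)]
```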
* fix: correctly set user_alias when passed in
Fixes https://github.com/BerriAI/litellm/issues/5612
* fix(types/utils.py): allow passing role for message object
https://github.com/BerriAI/litellm/issues/5621
* fix(litellm_logging.py): Fix langfuse logging across multiple projects
Fixes issue where langfuse logger was re-using the old logging object
* feat(proxy/_types.py): support adding key-based tags for tag-based routing
Enable tag based routing at key-level
* fix(proxy/_types.py): fix inheritance
* test(test_key_generate_prisma.py): fix test
* test: fix test
* fix(litellm_logging.py): return used callback object
* fix(utils.py): return citations for perplexity streaming
Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)
* fix(anthropic/chat.py): support fallbacks for anthropic streaming
Fixes https://github.com/BerriAI/litellm/issues/5512
* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)
* fix: fix linting errors
* fix(http_handler.py): fix raise_for_status error handling
* test: retry flaky test
* fix otel type
* fix(bedrock/embed): fix error raising
* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded
* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* All `model_group_alias` should show up in `/models`, `/model/info` , `/model_group/info` (#5539)
* fix(router.py): support returning model_alias model names in `/v1/models`
* fix(proxy_server.py): support returning model alias'es on `/model/info`
* feat(router.py): support returning model group alias for `/model_group/info`
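Sketch of the surfacing logic: alias names join the underlying model groups in the returned set (illustrative, not router.py verbatim):
```python
from typing import Dict, List


def get_model_names(model_list: List[dict], model_group_alias: Dict[str, str]) -> List[str]:
    names = {m["model_name"] for m in model_list}
    names.update(model_group_alias.keys())  # aliases show up alongside real groups
    return sorted(names)
```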
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fix linting errors
* build(model_prices_and_context_window.json): add amazon titan text premier pricing information
Closes https://github.com/BerriAI/litellm/issues/5560
* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3
* fix(success_handler.py): fix linting error
* fix(success_handler.py): fix linting errors
* fix(team_endpoints.py): Allows admin to update team member budgets
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
* refactor(bedrock): initial commit to refactor bedrock to a folder
Improve code readability + maintainability
* refactor: more refactor work
* fix: fix imports
* feat(bedrock/embeddings.py): support translating embedding into amazon embedding formats
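Different Bedrock providers expect different embedding payload shapes; a sketch of the translation for two of them (Titan and Cohere request bodies follow the AWS docs; the dispatch itself is illustrative):
```python
from typing import List


def transform_bedrock_embedding_request(model: str, texts: List[str]) -> dict:
    if "titan" in model:
        return {"inputText": texts[0]}  # Titan embeds one string per request
    if "cohere" in model:
        return {"texts": texts, "input_type": "search_document"}
    raise ValueError(f"no embedding request mapping for bedrock model: {model}")
```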
* fix: fix linting errors
* test: skip test on end of life model
* fix(cohere/embed.py): fix linting error
* fix(cohere/embed.py): fix typing
* fix(cohere/embed.py): fix post-call logging for cohere embedding call
* test(test_embeddings.py): fix error message assertion in test