* feat(proxy_cli.py): add new 'log_config' cli param
Allows passing logging.conf to uvicorn on startup
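Roughly what the new flag feeds into — a hedged sketch, assuming the proxy app import path is `litellm.proxy.proxy_server:app`:
```python
# Sketch only: the new --log_config CLI param is passed through to uvicorn,
# which accepts a standard logging config file via its `log_config` argument.
import uvicorn

uvicorn.run(
    "litellm.proxy.proxy_server:app",  # assumed app import path
    host="0.0.0.0",
    port=4000,
    log_config="logging.conf",  # placeholder path to any logging.config-compatible file
)
```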
* docs(cli.md): add logging conf to uvicorn cli docs
* fix(get_llm_provider_logic.py): fix default api base for litellm_proxy
Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): Add support for jina ai embeddings
Closes https://github.com/BerriAI/litellm/issues/6337
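A minimal usage sketch for the new route (the `jina_ai/` prefix, model name, and env var below are assumptions, not confirmed identifiers):
```python
import os
import litellm

os.environ["JINA_AI_API_KEY"] = "my-jina-api-key"  # assumed env var name

response = litellm.embedding(
    model="jina_ai/jina-embeddings-v3",  # assumed provider prefix / model name
    input=["good morning from litellm"],
)
print(len(response.data[0]["embedding"]))
```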
* docs(deploy.md): update entrypoint.sh filepath post-refactor
Fixes outdated docs
* feat(prometheus.py): emit time_to_first_token metric on prometheus
Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time to first token metric if stream is True
enables more accurate TTFT measurement
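Illustrative sketch of the stream-gated emit; metric and label names are assumptions, not the exact ones in prometheus.py:
```python
from prometheus_client import Histogram

# assumed metric/label names, for illustration only
litellm_time_to_first_token = Histogram(
    "litellm_time_to_first_token_seconds",
    "Time from request start to first streamed token",
    labelnames=["model"],
)

def maybe_observe_ttft(model: str, stream: bool, start_time: float, first_token_time: float) -> None:
    # TTFT is only meaningful for streaming requests -> skip the metric otherwise
    if not stream:
        return
    litellm_time_to_first_token.labels(model=model).observe(first_token_time - start_time)
```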
* test: handle vertex api instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix deepinfra default api base
* fix(anthropic/transformation.py): remove anthropic beta header (#6361)
* fix get_response_headers
* unit testing for get headers
* unit testing for anthropic / azure openai headers
* increase test coverage for test_completion_response_ratelimit_headers
* fix test rate limit headers
* refactor(main.py): streaming_chunk_builder
use <100 lines of code
refactor each component into a separate function - easier to maintain + test
* fix(utils.py): handle choices being None
openai pydantic schema updated
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from openai
* test(test_custom_callback_input.py): test message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import
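Rough usage sketch of the rebuilt chunk builder via the public `litellm.stream_chunk_builder` helper (model name is a placeholder):
```python
import litellm

messages = [{"role": "user", "content": "say hi"}]

chunks = []
for chunk in litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=messages,
    stream=True,
):
    chunks.append(chunk)

# rebuild a single ModelResponse (choices, usage, etc.) from the collected chunks
complete_response = litellm.stream_chunk_builder(chunks, messages=messages)
print(complete_response.usage)
```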
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* fix: move s3 to use CustomLogger
* add basic s3 logging test
* make s3 logging CustomLogger-compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
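Not the actual S3 logger — just a sketch of the batch-size / flush-interval pattern described above; bucket name, key scheme, and sizes are placeholders:
```python
import json
import time
import boto3

class BatchedS3Logger:
    """Buffer log payloads and flush them to S3 in batches (illustrative only)."""

    def __init__(self, bucket: str, batch_size: int = 512, flush_interval: float = 10.0):
        self.bucket = bucket
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self._buffer: list[dict] = []
        self._last_flush = time.monotonic()
        self._s3 = boto3.client("s3")

    def log_event(self, payload: dict) -> None:
        self._buffer.append(payload)
        # flush when the batch is full or the flush interval has elapsed
        if len(self._buffer) >= self.batch_size or time.monotonic() - self._last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        if not self._buffer:
            return
        body = "\n".join(json.dumps(item) for item in self._buffer)
        key = f"litellm-logs/{int(time.time())}.jsonl"  # placeholder key scheme
        self._s3.put_object(Bucket=self.bucket, Key=key, Body=body.encode("utf-8"))
        self._buffer = []
        self._last_flush = time.monotonic()
```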
* feat(azure/realtime): initial working commit for proxy azure openai realtime endpoint support
Adds support for passing /v1/realtime calls via litellm proxy
* feat(realtime_api/main.py): abstraction for handling openai realtime api calls
* feat(router.py): add `arealtime()` endpoint in router for realtime api calls
Allows using `model_list` in proxy for realtime as well
* fix: make realtime api a private function
Structure might change based on feedback. Make that clear to users.
* build(requirements.txt): add websockets to the requirements.txt
* feat(openai/realtime): add openai /v1/realtime api support
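Hedged client-side sketch of hitting the new passthrough with the `websockets` package; the endpoint path, query param, and auth style shown are assumptions, not a documented contract:
```python
import asyncio
import json
import websockets

async def main() -> None:
    url = "ws://localhost:4000/v1/realtime?model=gpt-4o-realtime-preview"  # assumed path/param
    headers = {"Authorization": "Bearer sk-1234"}  # proxy virtual key (placeholder)
    # note: older `websockets` releases use `extra_headers=` instead of `additional_headers=`
    async with websockets.connect(url, additional_headers=headers) as ws:
        await ws.send(json.dumps({"type": "response.create"}))
        print(await ws.recv())

asyncio.run(main())
```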
* fix(factory.py): bedrock: merge consecutive tool + user messages
Fixes https://github.com/BerriAI/litellm/issues/6007
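Illustrative sketch of the merge (the real logic lives in factory.py; assumes `content` is already a list of blocks):
```python
def merge_consecutive_messages(messages: list[dict]) -> list[dict]:
    """Fold back-to-back messages with the same role into one message.

    Bedrock's Converse API rejects consecutive messages from the same role, so
    tool results followed by a user turn need to be merged into a single entry.
    """
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"].extend(msg["content"])
        else:
            merged.append({"role": msg["role"], "content": list(msg["content"])})
    return merged
```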
* LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023)
* feat(together_ai/completion): handle together ai completion calls
* fix: handle list of int / list of list of int for text completion calls
* fix(utils.py): check if base model in bedrock converse model list
Fixes https://github.com/BerriAI/litellm/issues/6003
* test(test_optional_params.py): add unit tests for bedrock optional param mapping
Fixes https://github.com/BerriAI/litellm/issues/6003
* feat(utils.py): enable passing dummy tool call for anthropic/bedrock calls if tool_use blocks exist
Fixes https://github.com/BerriAI/litellm/issues/5388
* fixed an issue with tool use of claude models with anthropic and bedrock (#6013)
* fix(utils.py): handle empty schema for anthropic/bedrock
Fixes https://github.com/BerriAI/litellm/issues/6012
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix(proxy_cli.py): fix import route for app + health checks path (#6026)
* (testing): Enable testing us.anthropic.claude-3-haiku-20240307-v1:0. (#6018)
* fix(proxy_cli.py): fix import route for app + health checks gettysburg.wav
Fixes https://github.com/BerriAI/litellm/issues/5999
---------
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
---------
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
* fix(factory.py): correctly handle content in tool block
---------
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
* init litellm langfuse / gcs credentials in litellm logging obj
* add gcs key based test
* rename vars
* save standard_callback_dynamic_params in model call details
* add working gcs bucket key based logging
* test_basic_gcs_logging_per_request
* linting fix
* add doc on gcs bucket team based logging
* track api key and team in prom latency metric
* add test for latency metric
* test prometheus success metrics for latency
* track team and key labels for deployment failures
* add test for litellm_deployment_failure_responses_total
* fix checks for premium user on prometheus
* log_success_fallback_event and log_failure_fallback_event
* log original_exception in log_success_fallback_event
* track key, team and exception status and class on fallback metrics
* use get_standard_logging_metadata
* fix import error
* track litellm_deployment_successful_fallbacks
* add test test_proxy_fallback_metrics
* add logging for log_success_fallback_event
* fix test prometheus
* LiteLLM Minor Fixes & Improvements (09/26/2024) (#5925)
* fix(litellm_logging.py): don't initialize prometheus_logger if non premium user
Prevents bad error messages in logs
Fixes https://github.com/BerriAI/litellm/issues/5897
* Add Support for Custom Providers in Vision and Function Call Utils (#5688)
* Add Support for Custom Providers in Vision and Function Call Utils Lookup
* Remove parallel function call due to missing model info param
* Add Unit Tests for Vision and Function Call Changes
* fix-#5920: set header value to string to fix "'int' object has no att… (#5922)
* LiteLLM Minor Fixes & Improvements (09/24/2024) (#5880)
* LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842)
* feat(auth_utils.py): enable admin to allow client-side credentials to be passed
Makes it easier for devs to experiment with finetuned fireworks ai models
* feat(router.py): allow setting configurable_clientside_auth_params for a model
Closes https://github.com/BerriAI/litellm/issues/5843
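Rough sketch of wiring this into a Router `model_list`; the exact field name and location (shown here under `litellm_params`) is an assumption — check the proxy docs:
```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "fireworks-llama",
            "litellm_params": {
                "model": "fireworks_ai/accounts/fireworks/models/llama-v3p1-8b-instruct",
                "api_key": "os.environ/FIREWORKS_API_KEY",
                # allow callers to override these params client-side (assumed field name/location)
                "configurable_clientside_auth_params": ["api_base"],
            },
        }
    ]
)
```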
* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit
Fixes https://github.com/BerriAI/litellm/issues/5850
* fix(azure_ai/): support content list for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4237
* fix(litellm_logging.py): always set saved_cache_cost
Set to 0 by default
* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing
handles calling 405b+ size models
* fix(slack_alerting.py): fix error alerting for failed spend tracking
Fixes regression with slack alerting error monitoring
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error
* docs(bedrock.md): add llama3-1 models
* test: fix tests
* fix(azure_ai/chat): fix transformation for azure ai calls
* feat(azure_ai/embed): Add azure ai embeddings support
Closes https://github.com/BerriAI/litellm/issues/5861
* fix(azure_ai/embed): enable async embedding
* feat(azure_ai/embed): support azure ai multimodal embeddings
* fix(azure_ai/embed): support async multi modal embeddings
* feat(together_ai/embed): support together ai embedding calls
* feat(rerank/main.py): log source documents for rerank endpoints to langfuse
improves rerank endpoint logging
* fix(langfuse.py): support logging `/audio/speech` input to langfuse
* test(test_embedding.py): fix test
* test(test_completion_cost.py): fix helper util
* fix-#5920: set header value to string to fix "'int' object has no attribute 'encode'"
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* Revert "fix-#5920: set header value to string to fix "'int' object has no att…" (#5926)
This reverts commit a554ae2695.
* build(model_prices_and_context_window.json): add azure ai cohere rerank model pricing
Enables cost tracking for azure ai cohere rerank models
* fix(litellm_logging.py): fix debug log to be clearer
Closes https://github.com/BerriAI/litellm/issues/5909
* test(test_utils.py): fix test name
* fix(azure_ai/cost_calculator.py): support cost tracking for azure ai rerank models
* fix(azure_ai): fix azure ai base model cost tracking for rerank endpoints
* fix(converse_handler.py): support new llama 3-2 models
Fixes https://github.com/BerriAI/litellm/issues/5901
* fix(litellm_logging.py): ensure response is redacted for standard message logging
Fixes https://github.com/BerriAI/litellm/issues/5890#issuecomment-2378242360
* fix(cost_calculator.py): use 'get_model_info' for cohere rerank cost calculation
allows user to set custom cost for model
* fix(config.yml): fix docker hub auth
* build(config.yml): add docker auth to all tests
* fix(db/create_views.py): fix linting error
* fix(main.py): fix circular import
* fix(azure_ai/__init__.py): fix circular import
* fix(main.py): fix import
* fix: fix linting errors
* test: fix test
* fix(proxy_server.py): pass premium user value on startup
used for prometheus init
---------
Co-authored-by: Cole Murray <colemurray.cs@gmail.com>
Co-authored-by: bravomark <62681807+bravomark@users.noreply.github.com>
* handle streaming errors for azure ai studio
* [Perf Proxy] parallel request limiter - use one cache update call (#5932)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix: only check user id tpm / rpm limits when limits are set
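Toy sketch of the single-update idea (not the proxy's limiter code): gather the counters and write them with one pipelined cache call, and only include user-level counters when a user id / limit is set:
```python
import asyncio
from typing import Any

class ToyCache:
    """Stand-in for the proxy's cache, illustration only."""

    def __init__(self) -> None:
        self._store: dict[str, Any] = {}

    async def async_set_cache_pipeline(self, cache_list: list[tuple[str, Any]]) -> None:
        # one call covering many keys; a Redis-backed cache would use a pipeline here
        for key, value in cache_list:
            self._store[key] = value

async def update_counters(cache: ToyCache, api_key: str, user_id: str | None) -> None:
    cache_list: list[tuple[str, Any]] = [(f"{api_key}::request_count", 1)]
    if user_id is not None:  # skip user-level tracking when no user limit is set
        cache_list.append((f"{user_id}::request_count", 1))
    await cache.async_set_cache_pipeline(cache_list)

asyncio.run(update_counters(ToyCache(), "sk-1234", "user-1"))
```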
* fix test_openai_azure_embedding_with_oidc_and_cf
* test: fix test
* test(test_rerank.py): fix test
---------
Co-authored-by: Cole Murray <colemurray.cs@gmail.com>
Co-authored-by: bravomark <62681807+bravomark@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* feat(litellm_logging.py): update standard logging payload to include debug information for cost failures
Also includes fixes for cohere rerank cost tracking + databricks llama2 model cost tracking
Easier to repro cost failures and improve reliability in prod
* fix(proxy_server.py): emit cost failure debug info for slack alerting
Improves debug information for cost tracking failures, on slack alerting
* fix(vertex_llm_base.py): Handle api_base = ""
Fixes https://github.com/BerriAI/litellm/issues/5798
* fix(o1_transformation.py): handle stream_options not being supported
https://github.com/BerriAI/litellm/issues/5803
* docs(routing.md): fix docs
Closes https://github.com/BerriAI/litellm/issues/5808
* perf(internal_user_endpoints.py): reduce db calls for getting team_alias for a key
Use the list gotten earlier in `/user/info` endpoint
Reduces ui keys tab load time to 800ms (prev. 28s+)
* feat(proxy_server.py): support CONFIG_FILE_PATH as env var
Closes https://github.com/BerriAI/litellm/issues/5744
* feat(get_llm_provider_logic.py): add `litellm_proxy/` as a known openai-compatible route
simplifies calling litellm proxy
Reduces confusion when calling models on litellm proxy from litellm sdk
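Short usage sketch of the new prefix from the SDK (proxy URL and key are placeholders):
```python
import litellm

# call a model deployed behind a running LiteLLM proxy, straight from the SDK
response = litellm.completion(
    model="litellm_proxy/gpt-4o-mini",   # model name as configured on the proxy
    api_base="http://localhost:4000",    # proxy URL (placeholder)
    api_key="sk-1234",                   # proxy virtual key (placeholder)
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```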
* docs(litellm_proxy.md): cleanup docs
* fix(internal_user_endpoints.py): fix pydantic obj
* test(test_key_generate_prisma.py): fix test
* fix(proxy_server.py): use default azure credentials to support azure non-client secret kms
* fix(langsmith.py): raise error if credentials missing
* feat(langsmith.py): support error logging for langsmith + standard logging payload
Fixes https://github.com/BerriAI/litellm/issues/5738
* Fix hardcoding of schema in view check (#5749)
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* test(test_router_debug_logs.py): move to mock response
* Fix hardcoding of schema
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* fix(proxy_server.py): allow admin to disable ui via `DISABLE_ADMIN_UI` flag
* fix(router.py): fix default model name value
Fixes 55db19a1e4 (r1763712148)
* fix(utils.py): fix unbound variable error
* feat(rerank/main.py): add azure ai rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/5667
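Hedged usage sketch; the `azure_ai/` rerank model name and endpoint below are placeholders:
```python
import litellm

results = litellm.rerank(
    model="azure_ai/Cohere-rerank-v3-english",  # assumed deployment/model name
    query="What is the capital of France?",
    documents=[
        "Berlin is the capital of Germany.",
        "Paris is the capital of France.",
    ],
    top_n=1,
    api_base="https://my-endpoint.eastus2.models.ai.azure.com",  # placeholder endpoint
    api_key="my-azure-ai-key",                                   # placeholder key
)
print(results)
```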
* feat(secret_detection.py): Allow configuring secret detection params
Allows admin to control what plugins to run for secret detection. Prevents overzealous secret detection.
* docs(secret_detection.md): add secret detection guardrail docs
* fix: fix linting errors
* fix - deal with case when check view exists returns None (#5740)
* Revert "fix - deal with case when check view exists returns None (#5740)" (#5741)
This reverts commit 535228159b.
* Litellm fix router testing (#5748)
* test: fix testing - azure changed content policy error logic
* test: fix tests to use mock responses
* test(test_image_generation.py): handle api instability
* test(test_image_generation.py): handle azure api instability
* fix(utils.py): fix unbound variable error
* fix(utils.py): fix unbound variable error
* test: refactor test to use mock response
* test: mark flaky azure tests
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard (#5753)
Bumps [next](https://github.com/vercel/next.js) from 14.1.1 to 14.2.10.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.1.1...v14.2.10)
---
updated-dependencies:
- dependency-name: next
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` (#5754)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog (#5750)
* dd - start tracking redis status on dd
* add async_service_success_hook / failure hook in custom logger
* add async_service_failure_hook
* log service failures on dd
* fix import error
* add test for redis errors / warning
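Simplified sketch of the new service hooks (the real custom-logger hooks take a richer payload object; the argument names here are assumptions):
```python
class ToyServiceLogger:
    """Sketch of success/failure service hooks for internal dependencies."""

    async def async_service_success_hook(self, service: str, duration: float) -> None:
        # e.g. a redis / postgres call succeeded -> optionally emit a latency metric
        print(f"[datadog] {service} ok ({duration:.3f}s)")

    async def async_service_failure_hook(self, service: str, error: Exception) -> None:
        # e.g. a redis timeout -> forward an error event to DataDog
        print(f"[datadog] {service} failed: {error}")
```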
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on (#5745)
* fix tag routing - raise correct error when no model with tag based routing
* fix error string from tag based routing
* test router tag based routing
* raise 401 error when no tags available for deployment
* linting fix
* [Feat] Log Request metadata on gcs bucket logging (#5743)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* fix(litellm_logging.py): fix logging message
* fix(rerank_api/main.py): fix linting errors
* fix(custom_guardrails.py): maintain backwards compatibility for older guardrails
* fix(rerank_api/main.py): fix cost tracking for rerank endpoints
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: steffen-sbt <148480574+steffen-sbt@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>