* fix azure exceptions
* test_bad_request_error_contains_httpx_response
* use safe access to get exception response
* fix get attr
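
The safe-access fix above amounts to reading the exception's response attribute defensively. A minimal sketch (illustrative, not the exact litellm code):

```python
from typing import Optional

import httpx

def get_exception_response(exc: Exception) -> Optional[httpx.Response]:
    # getattr with a default avoids AttributeError when a provider SDK
    # raises an exception that carries no `response` attribute
    response = getattr(exc, "response", None)
    return response if isinstance(response, httpx.Response) else None
```
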
* fix(ollama.py): fix get model info request
Fixes https://github.com/BerriAI/litellm/issues/6703
* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param
* docs(anthropic.md): document all supported openai params for anthropic
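
A hedged sketch of the param mapping: the OpenAI-style `user` string is assumed to land in Anthropic's `metadata.user_id` field; the helper name here is illustrative.

```python
def map_openai_user_param(openai_params: dict) -> dict:
    """Translate the OpenAI 'user' param for the Anthropic Messages API."""
    anthropic_params: dict = {}
    user = openai_params.get("user")
    if user is not None:
        # Anthropic accepts an end-user identifier under metadata.user_id
        anthropic_params["metadata"] = {"user_id": user}
    return anthropic_params
```
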
* test: fix tests
* fix: fix tests
* feat(jina_ai/): add rerank support
Closes https://github.com/BerriAI/litellm/issues/6691
* test: handle service unavailable error
* fix(handler.py): refactor together ai rerank call
* test: update test to handle overloaded error
* test: fix test
* Litellm router trace (#6742)
* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks
* feat(router.py): log trace id across retry/fallback logic
allows grouping llm logs for the same request
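
The grouping idea, sketched (field and helper names are assumptions): generate one id at the parent call and let every retry/fallback attempt inherit it.

```python
import uuid

def ensure_trace_id(kwargs: dict) -> str:
    # Stamp the id once on the parent call; retries/fallbacks reuse it,
    # so all log lines for one request share a trace_id.
    metadata = kwargs.setdefault("metadata", {})
    metadata.setdefault("trace_id", str(uuid.uuid4()))
    return metadata["trace_id"]
```
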
* test: fix tests
* fix: fix test
* fix(transformation.py): only set non-none stop_sequences
* Litellm router disable fallbacks (#6743)
* bump: version 1.52.6 → 1.52.7
* feat(router.py): enable dynamically disabling fallbacks
Allows for enabling/disabling fallbacks per key
* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key
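
A minimal sketch of the per-key gate, assuming the key's metadata carries the `disable_fallbacks` flag described above:

```python
def should_attempt_fallbacks(key_metadata: dict, fallbacks: list) -> bool:
    # A key with disable_fallbacks=True opts out, even if the router
    # has fallback deployments configured.
    if key_metadata.get("disable_fallbacks", False):
        return False
    return len(fallbacks) > 0
```
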
* test: fix test
* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error
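
The mapping is essentially a substring check on the provider's error text; a sketch (the exception class here is illustrative):

```python
class InternalServerError(Exception):
    """500-style error: the backend, not the caller, is at fault."""

def map_overloaded_error(error_message: str) -> Exception:
    if "model is overloaded" in error_message.lower():
        # Overload is a transient server-side condition, so retries make sense
        return InternalServerError(error_message)
    return Exception(error_message)
```
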
* test: handle gemini error
* test: fix test
* fix: new run
* fix(caching): convert arg to equivalent kwargs in llm caching handler
prevent unexpected errors
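
A sketch of what converting positional args to kwargs can look like, using `inspect` to bind them to parameter names (assumed approach, not the verbatim handler code):

```python
import inspect

def convert_args_to_kwargs(func, args: tuple, kwargs: dict) -> dict:
    # Bind positional args to their parameter names so cache-key logic
    # always sees one canonical, keyword-only form.
    bound = inspect.signature(func).bind_partial(*args, **kwargs)
    return dict(bound.arguments)
```
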
* fix(caching_handler.py): don't pass args to caching
* fix(caching): remove all *args from caching.py
* fix(caching): consistent function signatures + abc method
* test(caching_unit_tests.py): add unit tests for llm caching
ensures coverage for common caching scenarios across different implementations
* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one
* fix(router.py): drop redis password requirement
* fix(proxy_server.py): fix faulty slack alerting check
* fix(langfuse.py): avoid copying functions/thread lock objects in metadata
fixes metadata copy error when parent otel span in metadata
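
A hedged sketch of the copy guard: deep-copy metadata values one by one, and keep a plain reference for anything uncopyable.

```python
import copy

def safe_copy_metadata(metadata: dict) -> dict:
    copied: dict = {}
    for key, value in metadata.items():
        try:
            copied[key] = copy.deepcopy(value)
        except (TypeError, copy.Error):
            # thread locks, otel spans, and functions can't be deep-copied
            copied[key] = value
    return copied
```
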
* test: update test
* fix(streaming_handler.py): save finish_reasons which might show up mid-stream (store last received one)
Fixes https://github.com/BerriAI/litellm/issues/6104
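
The fix's shape, sketched: keep the last non-null `finish_reason` observed mid-stream and fall back to it when the final chunk reports none (class name is illustrative).

```python
class StreamingState:
    """Tracks finish_reason across chunks."""

    def __init__(self) -> None:
        self.received_finish_reason = None

    def update(self, finish_reason) -> None:
        if finish_reason:  # store the last non-null value we saw
            self.received_finish_reason = finish_reason

    def final_finish_reason(self, last_chunk_reason) -> str:
        return last_chunk_reason or self.received_finish_reason or "stop"
```
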
* refactor: add readme to litellm_core_utils/
make it easier to navigate
* fix(team_endpoints.py): return team id + object for invalid team in `/team/list`
* fix(streaming_handler.py): remove import
* fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards (#6646)
* fix(pattern_match_deployments.py): default to user input if unable to… (#6632)
* fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards
* test: fix test
* test: reset test name
* test: update conftest to reload proxy server module between tests
* ci(config.yml): move langfuse out of local_testing
reduce ci/cd time
* ci(config.yml): cleanup langfuse ci/cd tests
* fix: update test to not use global proxy_server app module
* ci: move caching to a separate test pipeline
speed up ci pipeline
* test: update conftest to check if proxy_server attr exists before reloading
* build(conftest.py): don't block on inability to reload proxy_server
* ci(config.yml): update caching unit test filter to work on 'cache' keyword as well
* fix(encrypt_decrypt_utils.py): use function to get salt key
* test: mark flaky test
* test: handle anthropic overloaded errors
* refactor: create separate ci/cd pipeline for proxy unit tests
make ci/cd faster
* ci(config.yml): add litellm_proxy_unit_testing to build_and_test jobs
* ci(config.yml): generate prisma binaries for proxy unit tests
* test: readd vertex_key.json
* ci(config.yml): remove `-s` from proxy_unit_test cmd
speed up test
* ci: remove any 'debug' logging flag
speed up ci pipeline
* test: fix test
* test(test_braintrust.py): rerun
* test: add delay for braintrust test
* chore: comment for maritalk (#6607)
* Update gpt-4o-2024-08-06, and o1-preview, o1-mini models in model cost map (#6654)
* Adding supports_response_schema to gpt-4o-2024-08-06 models
* o1 models do not support vision
---------
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
* (QOL improvement) add unit testing for all static_methods in litellm_logging.py (#6640)
* add unit testing for standard logging payload
* unit testing for static methods in litellm_logging
* add code coverage check for litellm_logging
* litellm_logging_code_coverage
* test_get_final_response_obj
* fix validate_redacted_message_span_attributes
* test validate_redacted_message_span_attributes
* (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook (#6650)
* log error on prometheus service failure hook
* use a more accurate function name for wrapper that handles logging db metrics
* fix log_db_metrics
* test_log_db_metrics_failure_error_types
* fix linting
* fix auth checks
* Update several Azure AI models in model cost map (#6655)
* Adding Azure Phi 3/3.5 models to model cost map
* Update gpt-4o-mini models
* Adding missing Azure Mistral models to model cost map
* Adding Azure Llama3.2 models to model cost map
* Fix Gemini-1.5-flash pricing
* Fix Gemini-1.5-flash output pricing
* Fix Gemini-1.5-pro prices
* Fix Gemini-1.5-flash output prices
* Correct gemini-1.5-pro prices
* Correction on Vertex Llama3.2 entry
---------
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
* fix(streaming_handler.py): fix linting error
* test: remove duplicate test
causes gemini ratelimit error
---------
Co-authored-by: nobuo kawasaki <nobu007@users.noreply.github.com>
Co-authored-by: Emerson Gomes <emerson.gomes@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* feat: initial commit for watsonx chat endpoint support
Closes https://github.com/BerriAI/litellm/issues/6562
* feat(watsonx/chat/handler.py): support tool calling for watsonx
Closes https://github.com/BerriAI/litellm/issues/6562
* fix(streaming_utils.py): return empty chunk instead of failing if streaming value is invalid dict
ensures streaming works for ibm watsonx
* fix(openai_like/chat/handler.py): ensure asynchttphandler is passed correctly for openai like calls
* fix: ensure exception mapping works well for watsonx calls
* fix(openai_like/chat/handler.py): handle async streaming correctly
* feat(main.py): Make it clear when a user is passing an invalid message
add validation for user content message
Closes https://github.com/BerriAI/litellm/issues/6565
* fix: cleanup
* fix(utils.py): loosen validation check, to just make sure content types are valid
make litellm robust to future content updates
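
A sketch of the loosened check, assuming only the broad content *types* are validated (string, or a list of dict blocks), so new block types keep working:

```python
def validate_message_content(content) -> None:
    if content is None or isinstance(content, str):
        return
    if isinstance(content, list) and all(isinstance(b, dict) for b in content):
        return  # don't pin allowed block types; stay robust to new ones
    raise ValueError(f"invalid message content type: {type(content).__name__}")
```
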
* fix: fix linting error
* fix: fix linting errors
* fix(utils.py): make validation check more flexible
* test: handle langfuse list index out of range error
* Litellm dev 11 02 2024 (#6561)
* fix(dual_cache.py): update in-memory check for redis batch get cache
Fixes latency delay for async_batch_redis_cache
* fix(service_logger.py): fix race condition causing otel service logging to be overwritten if service_callbacks set
* feat(user_api_key_auth.py): add parent otel component for auth
allows us to isolate how much latency is added by auth checks
* perf(parallel_request_limiter.py): move async_set_cache_pipeline (from max parallel request limiter) out of execution path (background task)
reduces latency by 200ms
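
Moving the write off the hot path is a fire-and-forget task; a sketch with asyncio (the `async_set_cache_pipeline` name comes from the commit above, everything else is illustrative):

```python
import asyncio

_background_tasks: set = set()

async def record_usage_nonblocking(cache, cache_list: list) -> None:
    # Schedule the Redis pipeline write without awaiting it, so the
    # request stops paying for the round trip.
    task = asyncio.create_task(cache.async_set_cache_pipeline(cache_list))
    _background_tasks.add(task)  # keep a reference so it isn't GC'd early
    task.add_done_callback(_background_tasks.discard)
```
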
* feat(user_api_key_auth.py): have user api key auth object return user tpm/rpm limits - reduces redis calls in downstream task (parallel_request_limiter)
Reduces latency by 400-800ms
* fix(parallel_request_limiter.py): use batch get cache to reduce user/key/team usage object calls
reduces latency by 50-100ms
* fix: fix linting error
* fix(_service_logger.py): fix import
* fix(user_api_key_auth.py): fix service logging
* fix(dual_cache.py): don't pass 'self'
* fix: fix python3.8 error
* fix: fix init
* bump: version 1.51.4 → 1.51.5
* build(deps): bump cookie and express in /docs/my-website (#6566)
Bumps [cookie](https://github.com/jshttp/cookie) and [express](https://github.com/expressjs/express). These dependencies needed to be updated together.
Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1)
Updates `express` from 4.20.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.20.0...4.21.1)
---
updated-dependencies:
- dependency-name: cookie
dependency-type: indirect
- dependency-name: express
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(virtual_keys.md): update Dockerfile reference (#6554)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* (proxy fix) - call connect on prisma client when running setup (#6534)
* critical fix - call connect on prisma client when running setup
* fix test_proxy_server_prisma_setup
* fix test_proxy_server_prisma_setup
* Add 3.5 haiku (#6588)
* feat: add claude-3-5-haiku-20241022 entries
* feat: add claude-3-5-haiku-20241022 and vertex_ai/claude-3-5-haiku@20241022 models
* add missing entries, remove vision
* remove image token costs
* Litellm perf improvements 3 (#6573)
* perf: move writing key to cache, to background task
* perf(litellm_pre_call_utils.py): add otel tracing for pre-call utils
adds 200ms on calls with pgdb connected
* fix(litellm_pre_call_utils.py): rename call_type to actual call used
* perf(proxy_server.py): remove db logic from _get_config_from_file
was causing db calls to occur on every llm request, if team_id was set on key
* fix(auth_checks.py): add check for reducing db calls if user/team id does not exist in db
reduces latency/call by ~100ms
* fix(proxy_server.py): minor fix on existing_settings not incl alerting
* fix(exception_mapping_utils.py): map databricks exception string
* fix(auth_checks.py): fix auth check logic
* test: correctly mark flaky test
* fix(utils.py): handle auth token error for tokenizers.from_pretrained
* build: fix map
* build: fix map
* build: fix json for model map
* fix ImageObject conversion (#6584)
* (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546)
* unit test test_huggingface_text_completion_logprobs
* fix return TextCompletionHandler convert_chat_to_text_completion
* fix hf rest api
* fix test_huggingface_text_completion_logprobs
* fix linting errors
* fix importLiteLLMResponseObjectHandler
* fix test for LiteLLMResponseObjectHandler
* fix test text completion
* fix allow using 15 seconds for premium license check
* testing fix bedrock deprecated cohere.command-text-v14
* (feat) add `Predicted Outputs` for OpenAI (#6594)
* bump openai to openai==1.54.0
* add 'prediction' param
* testing fix bedrock deprecated cohere.command-text-v14
* test test_openai_prediction_param.py
* test_openai_prediction_param_with_caching
* doc Predicted Outputs
* doc Predicted Output
* (fix) Vertex Improve Performance when using `image_url` (#6593)
* fix transformation vertex
* test test_process_gemini_image
* test_image_completion_request
* testing fix - bedrock has deprecated cohere.command-text-v14
* fix vertex pdf
* bump: version 1.51.5 → 1.52.0
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577)
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check
* fix(lowest_tpm_rpm_v2.py): return headers in correct format
* test: update test
* test: remove eol model
* fix(proxy_server.py): fix db config loading logic
* fix(proxy_server.py): fix order of config / db updates, to ensure fields not overwritten
* test: skip test if required env var is missing
* test: fix test
* test: mark flaky test
* test: handle anthropic api instability
* test: update test
* test: bump num retries on langfuse tests - their api is quite bad
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
* fix(core_helpers.py): return None, instead of raising kwargs is None error
Closes https://github.com/BerriAI/litellm/issues/6500
* docs(cost_tracking.md): cleanup doc
* fix(vertex_and_google_ai_studio.py): handle function call with no params passed in
Closes https://github.com/BerriAI/litellm/issues/6495
* test(test_router_timeout.py): add test for router timeout + retry logic
* test: update test to use module level values
* (fix) Prometheus - Log Postgres DB latency, status on prometheus (#6484)
* fix logging DB fails on prometheus
* unit testing log to otel wrapper
* unit testing for service logger + prometheus
* use LATENCY buckets for service logging
* fix service logging
* docs clarify vertex vs gemini
* (router_strategy/) ensure all async functions use async cache methods (#6489)
* fix router strat
* use async set / get cache in router_strategy
* add coverage for router strategy
* fix imports
* fix batch_get_cache
* use async methods for least busy
* fix least busy use async methods
* fix test_dual_cache_increment
* test async_get_available_deployment when routing_strategy="least-busy"
* (fix) proxy - fix when `STORE_MODEL_IN_DB` should be set (#6492)
* set store_model_in_db at the top
* correctly use store_model_in_db global
* (fix) `PrometheusServicesLogger` `_get_metric` should return metric in Registry (#6486)
* fix logging DB fails on prometheus
* unit testing log to otel wrapper
* unit testing for service logger + prometheus
* use LATENCY buckets for service logging
* fix service logging
* fix _get_metric in prom services logger
* add clear doc string
* unit testing for prom service logger
* bump: version 1.51.0 → 1.51.1
* Add `azure/gpt-4o-mini-2024-07-18` to model_prices_and_context_window.json (#6477)
* Update utils.py (#6468)
Fixed missing keys
* (perf) Litellm redis router fix - ~100ms improvement (#6483)
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
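
Usage sketch with `litellm.register_model` (the model name and prices below are made up): keying by `provider/model` scopes the pricing to that provider instead of every model sharing the short name.

```python
import litellm

litellm.register_model({
    "openai/my-custom-model": {       # provider-qualified key
        "input_cost_per_token": 0.0000006,
        "output_cost_per_token": 0.0000012,
        "litellm_provider": "openai",
        "mode": "chat",
    }
})
```
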
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* refactor: pass parent_otel_span for redis caching calls in router
allows for more observability into what calls are causing latency issues
* test: update tests with new params
* refactor: ensure e2e otel tracing for router
* refactor(router.py): add more otel tracing across router
catch all latency issues for router requests
* fix: fix linting error
* fix(router.py): fix linting error
* fix: fix test
* test: fix tests
* fix(dual_cache.py): pass ttl to redis cache
* fix: fix param
* perf(cooldown_cache.py): improve cooldown cache, to store cache results in memory for 5s, prevents redis call from being made on each request
reduces 100ms latency per call with caching enabled on router
* fix: fix test
* fix(cooldown_cache.py): handle if a result is None
* fix(cooldown_cache.py): add debug statements
* refactor(dual_cache.py): move to using an in-memory check for batch get cache, to prevent redis from being hit for every call
* fix(cooldown_cache.py): fix linting error
* refactor(prometheus.py): move to using standard logging payload for reading the remaining request / tokens
Ensures prometheus token tracking works for anthropic as well
* fix: fix linting error
* fix(redis_cache.py): make sure ttl is always int (handle float values)
Fixes issue where redis_client.ex was not working correctly due to float ttl
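
The coercion itself is tiny; a sketch:

```python
from typing import Optional, Union

def normalize_ttl(ttl: Optional[Union[int, float]]) -> Optional[int]:
    # redis's EX option requires an integer number of seconds; a float
    # (e.g. 120.5) makes the SET call fail, so truncate it.
    return None if ttl is None else int(ttl)
```
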
* fix: fix linting error
* test: update test
* fix: fix linting error
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
Co-authored-by: vibhanshu-ob <115142120+vibhanshu-ob@users.noreply.github.com>
* add type for dd llm obs request ob
* working dd llm obs
* datadog use well defined type
* clean up
* unit test test_create_llm_obs_payload
* fix linting
* add datadog_llm_observability
* add datadog_llm_observability
* docs DD LLM obs
* run testing again
* document DD_ENV
* test_create_llm_obs_payload
* fix(utils.py): support passing dynamic api base to validate_environment
Returns True if just api base is required and api base is passed
* fix(litellm_pre_call_utils.py): feature flag sending client headers to llm api
Fixes https://github.com/BerriAI/litellm/issues/6410
* fix(anthropic/chat/transformation.py): return correct error message
* fix(http_handler.py): add error response text in places where we expect it
* fix(factory.py): handle base case of no non-system messages to bedrock
Fixes https://github.com/BerriAI/litellm/issues/6411
* feat(cohere/embed): Support cohere image embeddings
Closes https://github.com/BerriAI/litellm/issues/6413
* fix(__init__.py): fix linting error
* docs(supported_embedding.md): add image embedding example to docs
* feat(cohere/embed): use cohere embedding returned usage for cost calc
* build(model_prices_and_context_window.json): add embed-english-v3.0 details (image cost + 'supports_image_input' flag)
* fix(cohere_transformation.py): fix linting error
* test(test_proxy_server.py): cleanup test
* test: cleanup test
* fix: fix linting errors
* feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm_headers' param
enables passing user info to backend llm (user request for custom vllm server)
* fix(litellm_logging.py): fix linting error
* feat(proxy_server.py): check if views exist on proxy server startup + refactor startup event logic to <50 LOC
* refactor(redis_cache.py): use a default cache value when writing to r… (#6358)
* refactor(redis_cache.py): use a default cache value when writing to redis
prevent redis from blowing up in high traffic
* refactor(redis_cache.py): refactor all cache writes to use self.get_ttl
ensures default ttl always used when writing to redis
Prevents redis db from blowing up in prod
* feat(proxy_cli.py): add new 'log_config' cli param (#6352)
* feat(proxy_cli.py): add new 'log_config' cli param
Allows passing logging.conf to uvicorn on startup
* docs(cli.md): add logging conf to uvicorn cli docs
* fix(get_llm_provider_logic.py): fix default api base for litellm_proxy
Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): Add support for jina ai embeddings
Closes https://github.com/BerriAI/litellm/issues/6337
* docs(deploy.md): update entrypoint.sh filepath post-refactor
Fixes outdated docs
* feat(prometheus.py): emit time_to_first_token metric on prometheus
Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time to first token metric if stream is True
enables more accurate ttft usage
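
An illustrative version of the metric (metric and label names are assumptions, not litellm's exact ones):

```python
import time
from prometheus_client import Histogram

TTFT = Histogram(
    "litellm_time_to_first_token_seconds",
    "Seconds from request start until the first streamed token",
    ["model"],
)

def log_time_to_first_token(model: str, start: float, stream: bool) -> None:
    if not stream:
        return  # only meaningful for streaming responses
    TTFT.labels(model=model).observe(time.time() - start)
```
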
* test: handle vertex api instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix deepinfra default api base
* fix(anthropic/transformation.py): remove anthropic beta header (#6361)
* docs(sidebars.js): add jina ai embedding to docs
* docs(sidebars.js): add jina ai to left nav
* bump: version 1.50.1 → 1.50.2
* langfuse use helper for get_langfuse_logging_config
* Refactor: apply early return (#6369)
* (refactor) remove berrispendLogger - unused logging integration (#6363)
* fix remove berrispendLogger
* remove unused clickhouse logger
* fix docs configs.md
* (fix) standard logging metadata + add unit testing (#6366)
* fix setting StandardLoggingMetadata
* add unit testing for standard logging metadata
* fix otel logging test
* fix linting
* fix typing
* Revert "(fix) standard logging metadata + add unit testing (#6366)" (#6381)
This reverts commit 8359cb6fa9.
* add new 3.5 model card (#6378)
* Add claude 3 5 sonnet 20241022 models for all providers (#6380)
* Add Claude 3.5 v2 on Amazon Bedrock and Vertex AI.
* added anthropic/claude-3-5-sonnet-20241022
* add new 3.5 model card
---------
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: lowjiansheng <15527690+lowjiansheng@users.noreply.github.com>
* test(skip-flaky-google-context-caching-test): google is not reliable. their sample code is also not working
* test(test_alangfuse.py): handle flaky langfuse test better
* (feat) Arize - Allow using Arize HTTP endpoint (#6364)
* arize use helper for get_arize_opentelemetry_config
* use helper to get Arize OTEL config
* arize add helpers for arize
* docs allow using arize http endpoint
* fix importing OTEL for Arize
* use static methods for ArizeLogger
* fix ArizeLogger tests
* Litellm dev 10 22 2024 (#6384)
* fix(utils.py): add 'disallowed_special' for token counting on .encode()
Fixes error when '<|endoftext|>' in string
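
Usage sketch with tiktoken showing the effect of the flag: `disallowed_special=()` lets `.encode()` accept text containing special tokens instead of raising.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
# Without disallowed_special=(), this line raises a ValueError because the
# text contains the special token '<|endoftext|>'.
tokens = enc.encode("hello <|endoftext|> world", disallowed_special=())
print(len(tokens))
```
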
* Revert "(fix) standard logging metadata + add unit testing (#6366)" (#6381)
This reverts commit 8359cb6fa9.
* add new 3.5 model card (#6378)
* Add claude 3 5 sonnet 20241022 models for all providers (#6380)
* Add Claude 3.5 v2 on Amazon Bedrock and Vertex AI.
* added anthropic/claude-3-5-sonnet-20241022
* add new 3.5 model card
---------
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: lowjiansheng <15527690+lowjiansheng@users.noreply.github.com>
* test(skip-flaky-google-context-caching-test): google is not reliable. their sample code is also not working
* Fix metadata being overwritten in speech() (#6295)
* fix: adding missing redis cluster kwargs (#6318)
Co-authored-by: Ali Arian <ali.arian@breadfinancial.com>
* Add support for `max_completion_tokens` in Azure OpenAI (#6376)
Now that Azure supports `max_completion_tokens`, there's no need for special handling for this param; let it pass through. More details: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=python-secure#api-support
* build(model_prices_and_context_window.json): add voyage-finance-2 pricing
Closes https://github.com/BerriAI/litellm/issues/6371
* build(model_prices_and_context_window.json): fix llama3.1 pricing model name on map
Closes https://github.com/BerriAI/litellm/issues/6310
* feat(realtime_streaming.py): just log specific events
Closes https://github.com/BerriAI/litellm/issues/6267
* fix(utils.py): more robust checking if unmapped vertex anthropic model belongs to that family of models
Fixes https://github.com/BerriAI/litellm/issues/6383
* Fix Ollama stream handling for tool calls with None content (#6155)
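
The shape of that fix, sketched (chunk layout assumed from Ollama's streaming format):

```python
def normalize_ollama_chunk(chunk: dict) -> dict:
    # Tool-call chunks can arrive with content=None; coalesce to "" so
    # downstream string concatenation doesn't blow up.
    message = chunk.get("message") or {}
    if message.get("content") is None:
        message["content"] = ""
    chunk["message"] = message
    return chunk
```
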
* test(test_max_completions): update test now that azure supports 'max_completion_tokens'
* fix(handler.py): fix linting error
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: John HU <hszqqq12@gmail.com>
Co-authored-by: Ali Arian <113945203+ali-arian@users.noreply.github.com>
Co-authored-by: Ali Arian <ali.arian@breadfinancial.com>
Co-authored-by: Anand Taralika <46954145+taralika@users.noreply.github.com>
Co-authored-by: Nolan Tremelling <34580718+NolanTrem@users.noreply.github.com>
* bump: version 1.50.2 → 1.50.3
* build(deps): bump http-proxy-middleware in /docs/my-website (#6395)
Bumps [http-proxy-middleware](https://github.com/chimurai/http-proxy-middleware) from 2.0.6 to 2.0.7.
- [Release notes](https://github.com/chimurai/http-proxy-middleware/releases)
- [Changelog](https://github.com/chimurai/http-proxy-middleware/blob/v2.0.7/CHANGELOG.md)
- [Commits](https://github.com/chimurai/http-proxy-middleware/compare/v2.0.6...v2.0.7)
---
updated-dependencies:
- dependency-name: http-proxy-middleware
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* (docs + testing) Correctly document that the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod (#6339)
* fix docs use documented timeout
* document request timeout
* add test for litellm.request_timeout
* add test for checking value of timeout
* (refactor) move convert dict to model response to llm_response_utils/ (#6393)
* refactor move convert dict to model response
* fix imports
* fix import _handle_invalid_parallel_tool_calls
* (refactor) litellm.Router client initialization utils (#6394)
* refactor InitalizeOpenAISDKClient
* use helper func for _should_create_openai_sdk_client_for_model
* use static methods for set client on litellm router
* reduce LOC in _get_client_initialization_params
* fix _should_create_openai_sdk_client_for_model
* code quality fix
* test test_should_create_openai_sdk_client_for_model
* test test_get_client_initialization_params_openai
* fix mypy linting errors
* fix OpenAISDKClientInitializationParams
* test_get_client_initialization_params_all_env_vars
* test_get_client_initialization_params_azure_ai_studio_mistral
* test_get_client_initialization_params_default_values
* fix _get_client_initialization_params
* (fix) Langfuse key based logging (#6372)
* langfuse use helper for get_langfuse_logging_config
* fix get_langfuse_logger_for_request
* fix import
* fix get_langfuse_logger_for_request
* test_get_langfuse_logger_for_request_with_dynamic_params
* unit testing for test_get_langfuse_logger_for_request_with_no_dynamic_params
* parameterized langfuse testing
* fix langfuse test
* fix langfuse logging
* fix test_aaalangfuse_logging_metadata
* fix langfuse log metadata test
* fix langfuse logger
* use create_langfuse_logger_from_credentials
* fix test_get_langfuse_logger_for_request_with_no_dynamic_params
* fix correct langfuse/ folder structure
* use static methods for langfuse logger
* add commment on langfuse handler
* fix linting error
* add unit testing for langfuse logging
* fix linting
* fix failure handler langfuse
* Revert "(refactor) litellm.Router client initialization utils (#6394)" (#6403)
This reverts commit b70147f63b.
* def test_text_completion_with_echo(stream): (#6401)
test
* fix linting - remove # noqa PLR0915 from fixed function
* test: cleanup codestral tests - backend api unavailable
* (refactor) prometheus async_log_success_event to be under 100 LOC (#6416)
* unit testing for prometheus
* unit testing for success metrics
* use 1 helper for _increment_token_metrics
* use helper for _increment_remaining_budget_metrics
* use _increment_remaining_budget_metrics
* use _increment_top_level_request_and_spend_metrics
* use helper for _set_latency_metrics
* remove noqa violation
* fix test prometheus
* test prometheus
* unit testing for all prometheus helper functions
* fix prom unit tests
* fix unit tests prometheus
* fix unit test prom
* (refactor) router - use static methods for client init utils (#6420)
* use InitalizeOpenAISDKClient
* use InitalizeOpenAISDKClient static method
* fix # noqa: PLR0915
* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend (#6406)
* code cleanup remove unused and undocumented code files
* fix unused logging integrations cleanup
* bump: version 1.50.3 → 1.50.4
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Hakan Taşköprü <Haknt@users.noreply.github.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: John HU <hszqqq12@gmail.com>
Co-authored-by: Ali Arian <113945203+ali-arian@users.noreply.github.com>
Co-authored-by: Ali Arian <ali.arian@breadfinancial.com>
Co-authored-by: Anand Taralika <46954145+taralika@users.noreply.github.com>
Co-authored-by: Nolan Tremelling <34580718+NolanTrem@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(bedrock.md): clarify bedrock auth in litellm docs
* fix(convert_dict_to_response.py): Fixes https://github.com/BerriAI/litellm/issues/6387
* feat(pattern_match_deployments.py): more robust handling for wildcard routes (model_name: custom_route/* -> openai/*)
Enables user to expose custom routes to users with dynamic handling
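
A sketch of the wildcard mapping with the default-to-user-input fallback (regex approach assumed, not the verbatim pattern_match_deployments code):

```python
import re

def map_wildcard_route(user_model: str, routes: dict) -> str:
    for pattern, target in routes.items():
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, user_model)
        if match:
            return target.replace("*", match.group(1))
    return user_model  # default to user input if nothing matched

# map_wildcard_route("custom_route/gpt-4o", {"custom_route/*": "openai/*"})
# -> "openai/gpt-4o"
```
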
* test: add more testing
* docs(custom_pricing.md): add debug tutorial for custom pricing
* test: skip codestral test - unreachable backend
* test: fix test
* fix(pattern_matching_deployments.py): fix typing
* test: cleanup codestral tests - backend api unavailable
* feat(litellm_logging.py): refactor standard_logging_payload function to be <50 LOC
fixes issue where usage information was not following typed values
* fix(litellm_logging.py): fix completion start time handling
* fix get_response_headers
* unit testing for get headers
* unit testing for anthropic / azure openai headers
* increase test coverage for test_completion_response_ratelimit_headers
* fix test rate limit headers
* refactor(main.py): streaming_chunk_builder
use <100 lines of code
refactor each component into a separate function - easier to maintain + test
* fix(utils.py): handle choices being None
openai pydantic schema updated
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from openai
* test(test_custom_callback_input.py): test message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling