* build(ui/create_key_button.tsx): support adding tags for cost tracking/routing when making key
* LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870)
* feat(pass_through_endpoints/): support logging anthropic/gemini pass through calls to langfuse/s3/etc.
* fix(utils.py): allow disabling end user cost tracking with new param
Allows proxy admin to disable cost tracking for end users - keeps prometheus metrics small
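A hedged sketch of toggling this in Python, assuming the new param surfaces as a module-level litellm setting (proxy `litellm_settings` entries map onto these attributes):

```python
import litellm

# assumption: the new param is exposed like other litellm_settings flags;
# when enabled, per-end-user labels are dropped from cost/metric tracking
litellm.disable_end_user_cost_tracking = True
```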
* docs(configs.md): add disable_end_user_cost_tracking reference to docs
* feat(key_management_endpoints.py): add support for restricting access to `/key/generate` by team/proxy level role
Enables admin to restrict key creation, and assign team admins to handle distributing keys
* test(test_key_management.py): add unit testing for personal / team key restriction checks
* docs: add docs on restricting key creation
* docs(finetuned_models.md): add new guide on calling finetuned models
* docs(input.md): cleanup anthropic supported params
Closes https://github.com/BerriAI/litellm/issues/6856
* test(test_embedding.py): add test for passing extra headers via embedding
* feat(cohere/embed): pass client to async embedding
* feat(rerank.py): add `/v1/rerank` if missing for cohere base url
Closes https://github.com/BerriAI/litellm/issues/6844
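A minimal sketch of the path normalization, using a hypothetical `ensure_rerank_path` helper (the real logic lives in `rerank.py`):

```python
from urllib.parse import urlparse

def ensure_rerank_path(api_base: str) -> str:
    # append '/v1/rerank' only when the caller's base url doesn't already end with it
    if not urlparse(api_base).path.rstrip("/").endswith("/v1/rerank"):
        api_base = api_base.rstrip("/") + "/v1/rerank"
    return api_base

assert ensure_rerank_path("https://api.cohere.com") == "https://api.cohere.com/v1/rerank"
assert ensure_rerank_path("https://api.cohere.com/v1/rerank") == "https://api.cohere.com/v1/rerank"
```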
* fix(main.py): pass extra_headers param to openai
Fixes https://github.com/BerriAI/litellm/issues/6836
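Usage example of the now-forwarded param (`extra_headers` is an existing litellm completion kwarg; the header name here is illustrative):

```python
import litellm

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
    # with the fix, these headers reach the underlying OpenAI request
    extra_headers={"X-Request-Source": "my-app"},
)
```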
* fix(litellm_logging.py): don't disable global callbacks when dynamic callbacks are set
Fixes issue where global callbacks (e.g. prometheus) were overridden when langfuse was set dynamically
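A sketch of the intended merge behavior, with hypothetical names (`combine_callbacks`, `global_callbacks`, `dynamic_callbacks`):

```python
def combine_callbacks(global_callbacks: list, dynamic_callbacks: list | None) -> list:
    """Per-request (dynamic) callbacks should extend the global set, not replace it."""
    if not dynamic_callbacks:
        return global_callbacks
    combined = list(global_callbacks)
    for cb in dynamic_callbacks:
        if cb not in combined:  # de-duplicate while preserving order
            combined.append(cb)
    return combined

# prometheus stays active even when langfuse is added dynamically
assert combine_callbacks(["prometheus"], ["langfuse"]) == ["prometheus", "langfuse"]
```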
* fix(handler.py): fix linting error
* fix: fix typing
* build: add conftest to proxy_admin_ui_tests/
* test: fix test
* fix: fix linting errors
* test: fix test
* fix: fix pass through testing
* feat(key_management_endpoints.py): allow proxy_admin to enforce params on key creation
allows admin to force team keys to have tags
* build(ui/): show teams in leftnav + allow team admin to add new members
* build(ui/): show created tags in dropdown
makes it easier for admin to add tags to keys
* test(test_key_management.py): fix test
* test: fix test
* fix playwright e2e ui test
* fix e2e ui testing deps
* fix: fix linting errors
* fix e2e ui testing
* fix e2e ui testing, only run e2e ui testing in playwright
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* add langsmith_api_key to StandardCallbackDynamicParams
* create a file for langsmith types
* langsmith add key / team based logging
* add key based logging for langsmith
* fix langsmith key based logging
* fix linting langsmith
* remove NOQA violation
* add unit test coverage for all helpers in test langsmith
* test_langsmith_key_based_logging
* docs langsmith key based logging
* run langsmith tests in logging callback tests
* fix logging testing
* test_langsmith_key_based_logging
* test_add_callback_via_key_litellm_pre_call_utils_langsmith
* add debug statement langsmith key based logging
* test_langsmith_key_based_logging
* use CustomBatchLogger for GCS
* add GCS bucket logging type
* use batch logging for GCS bucket
* add gcs_bucket
* allow setting flush_interval on CustomBatchLogger
* set GCS_FLUSH_INTERVAL to 1s
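Illustrative sketch of the batching pattern (names are hypothetical; the real class is `CustomBatchLogger`):

```python
import asyncio
import os

class BatchLoggerSketch:
    def __init__(self, flush_interval: float = 10.0):
        # e.g. GCS_FLUSH_INTERVAL=1 for near-real-time GCS bucket logging
        self.flush_interval = float(os.getenv("GCS_FLUSH_INTERVAL", flush_interval))
        self.buffer: list = []

    async def periodic_flush(self):
        # drain the buffer once per flush_interval instead of writing per-event
        while True:
            await asyncio.sleep(self.flush_interval)
            batch, self.buffer = self.buffer, []
            if batch:
                await self.send_batch(batch)

    async def send_batch(self, batch: list):
        print(f"flushing {len(batch)} events")
```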
* fix test_key_logging
* fix test_key_logging
* add docs on new env vars
* fix: use helper for _handle_failed_db_connection_for_get_key_object
* track ALLOW_FAILED_DB_REQUESTS on prometheus
* fix allow_failed_db_requests check
* fix allow_requests_on_db_unavailable
* fix allow_requests_on_db_unavailable
* docs allow_requests_on_db_unavailable
* identify user_id as litellm_proxy_admin_name when DB is failing
* test_handle_failed_db_connection
* fix test_user_api_key_auth_db_unavailable
* update best practices for prod doc
* update best practices for prod
* fix handle db failure
* fix(core_helpers.py): return None instead of raising a "kwargs is None" error
Closes https://github.com/BerriAI/litellm/issues/6500
* docs(cost_tracking.md): cleanup doc
* fix(vertex_and_google_ai_studio.py): handle function call with no params passed in
Closes https://github.com/BerriAI/litellm/issues/6495
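Hedged sketch of one plausible shape of the fix (Gemini rejects empty OBJECT schemas, so a parameter-less tool can omit the field entirely; the helper name is hypothetical):

```python
def build_function_declaration(name: str, parameters: dict | None) -> dict:
    declaration: dict = {"name": name}
    # tools declared with no params: skip the parameters field rather than
    # sending an empty schema the API may reject
    if parameters and parameters.get("properties"):
        declaration["parameters"] = parameters
    return declaration
```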
* test(test_router_timeout.py): add test for router timeout + retry logic
* test: update test to use module level values
* (fix) Prometheus - Log Postgres DB latency, status on prometheus (#6484)
* fix: log DB failures on prometheus
* unit testing log to otel wrapper
* unit testing for service logger + prometheus
* use LATENCY buckets for service logging
* fix service logging
* docs clarify vertex vs gemini
* (router_strategy/) ensure all async functions use async cache methods (#6489)
* fix router strategy
* use async set / get cache in router_strategy
* add coverage for router strategy
* fix imports
* fix batch_get_cache
* use async methods for least busy
* fix least busy use async methods
* fix test_dual_cache_increment
* test async_get_available_deployment when routing_strategy="least-busy"
* (fix) proxy - fix when `STORE_MODEL_IN_DB` should be set (#6492)
* set store_model_in_db at the top
* correctly use store_model_in_db global
* (fix) `PrometheusServicesLogger` `_get_metric` should return metric in Registry (#6486)
* fix: log DB failures on prometheus
* unit testing log to otel wrapper
* unit testing for service logger + prometheus
* use LATENCY buckets for service logging
* fix service logging
* fix _get_metric in prom services logger
* add clear doc string
* unit testing for prom service logger
* bump: version 1.51.0 → 1.51.1
* Add `azure/gpt-4o-mini-2024-07-18` to model_prices_and_context_window.json (#6477)
* Update utils.py (#6468)
Fixed missing keys
* (perf) Litellm redis router fix - ~100ms improvement (#6483)
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
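Hedged usage sketch with `litellm.register_model` (litellm's API for registering custom pricing); the model name is hypothetical:

```python
import litellm

# keying by "provider/model" scopes the pricing to that exact combination,
# instead of colliding with a same-named model from another provider
litellm.register_model({
    "openai/my-finetuned-model": {
        "input_cost_per_token": 2e-06,
        "output_cost_per_token": 6e-06,
        "litellm_provider": "openai",
        "mode": "chat",
    }
})
```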
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* refactor: pass parent_otel_span for redis caching calls in router
allows for more observability into what calls are causing latency issues
* test: update tests with new params
* refactor: ensure e2e otel tracing for router
* refactor(router.py): add more otel tracing across router
catch all latency issues for router requests
* fix: fix linting error
* fix(router.py): fix linting error
* fix: fix test
* test: fix tests
* fix(dual_cache.py): pass ttl to redis cache
* fix: fix param
* perf(cooldown_cache.py): improve cooldown cache to store results in memory for 5s, preventing a redis call on each request
reduces 100ms latency per call with caching enabled on router
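A self-contained sketch of the in-memory-first pattern (hypothetical class; the real logic is in `cooldown_cache.py`):

```python
import time

class InMemoryTTLCache:
    def __init__(self, ttl_seconds: float = 5.0):
        self.ttl_seconds = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: caller falls back to redis
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic() + self.ttl_seconds, value)
```

Callers check this cache first and only fall back to redis on a miss, so hot cooldown keys cost a dict lookup instead of a network round trip.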
* fix: fix test
* fix(cooldown_cache.py): handle if a result is None
* fix(cooldown_cache.py): add debug statements
* refactor(dual_cache.py): move to using an in-memory check for batch get cache, to prevent redis from being hit for every call
* fix(cooldown_cache.py): fix linting error
* refactor(prometheus.py): move to using standard logging payload for reading the remaining request / tokens
Ensures prometheus token tracking works for anthropic as well
* fix: fix linting error
* fix(redis_cache.py): make sure ttl is always int (handle float values)
Fixes issue where redis_client.ex was not working correctly due to float ttl
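A minimal sketch of the coercion (hypothetical helper; redis-py's `ex` argument expects an int):

```python
def normalize_ttl(ttl: float | int | None) -> int | None:
    # redis_client.set(..., ex=ttl) misbehaves with float ttls like 60.0
    return None if ttl is None else int(ttl)
```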
* fix: fix linting error
* test: update test
* fix: fix linting error
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
Co-authored-by: vibhanshu-ob <115142120+vibhanshu-ob@users.noreply.github.com>
* add type for dd llm obs request object
* working dd llm obs
* datadog use well defined type
* clean up
* unit test test_create_llm_obs_payload
* fix linting
* add datadog_llm_observability
* add datadog_llm_observability
* docs DD LLM obs
* run testing again
* document DD_ENV
* test_create_llm_obs_payload
* fix(utils.py): support passing dynamic api base to validate_environment
Returns True if only an api base is required and one is passed
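Usage sketch (`litellm.validate_environment` returns a dict with `keys_in_environment` / `missing_keys`; the model name here is hypothetical):

```python
import litellm

result = litellm.validate_environment(
    model="openai/my-proxied-model",
    api_base="http://localhost:4000",  # with the fix, this satisfies an api-base-only check
)
print(result["keys_in_environment"], result["missing_keys"])
```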
* fix(litellm_pre_call_utils.py): feature flag sending client headers to llm api
Fixes https://github.com/BerriAI/litellm/issues/6410
* fix(anthropic/chat/transformation.py): return correct error message
* fix(http_handler.py): add error response text in places where we expect it
* fix(factory.py): handle base case of no non-system messages to bedrock
Fixes https://github.com/BerriAI/litellm/issues/6411
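A hedged sketch of the base-case handling (hypothetical helper; the placeholder content is illustrative):

```python
def ensure_non_system_message(messages: list[dict]) -> list[dict]:
    # bedrock requires at least one user/assistant turn; if the caller sent
    # only system messages, append a minimal user message
    if all(m.get("role") == "system" for m in messages):
        messages = messages + [{"role": "user", "content": "."}]
    return messages
```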
* feat(cohere/embed): Support cohere image embeddings
Closes https://github.com/BerriAI/litellm/issues/6413
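Hedged usage sketch (cohere's v3 embed models accept base64 data-URI images; `input_type` mirrors cohere's image/text distinction):

```python
import litellm

response = litellm.embedding(
    model="cohere/embed-english-v3.0",
    input=["data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."],  # truncated for brevity
    input_type="image",
)
```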
* fix(__init__.py): fix linting error
* docs(supported_embedding.md): add image embedding example to docs
* feat(cohere/embed): use cohere embedding returned usage for cost calc
* build(model_prices_and_context_window.json): add embed-english-v3.0 details (image cost + 'supports_image_input' flag)
* fix(cohere_transformation.py): fix linting error
* test(test_proxy_server.py): cleanup test
* test: cleanup test
* fix: fix linting errors
* docs(bedrock.md): clarify bedrock auth in litellm docs
* fix(convert_dict_to_response.py): Fixes https://github.com/BerriAI/litellm/issues/6387
* feat(pattern_match_deployments.py): more robust handling for wildcard routes (model_name: custom_route/* -> openai/*)
Enables user to expose custom routes to users with dynamic handling
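A simplified sketch of the wildcard matching (the real implementation in `pattern_match_deployments.py` also maps the captured segment onto the target route, e.g. onto `openai/*`):

```python
import re

def route_matches(model_name_pattern: str, requested_model: str) -> bool:
    # escape regex metacharacters, then turn '*' into a capture-all wildcard
    regex = re.escape(model_name_pattern).replace(r"\*", "(.*)")
    return re.fullmatch(regex, requested_model) is not None

assert route_matches("custom_route/*", "custom_route/gpt-4o")
assert not route_matches("custom_route/*", "other_route/gpt-4o")
```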
* test: add more testing
* docs(custom_pricing.md): add debug tutorial for custom pricing
* test: skip codestral test - unreachable backend
* test: fix test
* fix(pattern_matching_deployments.py): fix typing
* test: cleanup codestral tests - backend api unavailable
* (refactor) prometheus async_log_success_event to be under 100 LOC (#6416)
* unit testing for prometheus
* unit testing for success metrics
* use 1 helper for _increment_token_metrics
* use helper for _increment_remaining_budget_metrics
* use _increment_remaining_budget_metrics
* use _increment_top_level_request_and_spend_metrics
* use helper for _set_latency_metrics
* remove noqa violation
* fix test prometheus
* test prometheus
* unit testing for all prometheus helper functions
* fix prom unit tests
* fix unit tests prometheus
* fix unit test prom
* (refactor) router - use static methods for client init utils (#6420)
* use InitalizeOpenAISDKClient
* use InitalizeOpenAISDKClient static method
* fix # noqa: PLR0915
* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend (#6406)
* code cleanup remove unused and undocumented code files
* fix unused logging integrations cleanup
* bump: version 1.50.3 → 1.50.4
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* arize use helper for get_arize_opentelemetry_config
* use helper to get Arize OTEL config
* arize add helpers for arize
* docs allow using arize http endpoint
* fix importing OTEL for Arize
* use static methods for ArizeLogger
* fix ArizeLogger tests
* feat(proxy_cli.py): add new 'log_config' cli param
Allows passing logging.conf to uvicorn on startup
* docs(cli.md): add logging conf to uvicorn cli docs
* fix(get_llm_provider_logic.py): fix default api base for litellm_proxy
Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): Add support for jina ai embeddings
Closes https://github.com/BerriAI/litellm/issues/6337
* docs(deploy.md): update entrypoint.sh filepath post-refactor
Fixes outdated docs
* feat(prometheus.py): emit time_to_first_token metric on prometheus
Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time to first token metric if stream is True
enables more accurate ttft measurement
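Sketch of the gating (the metric name here is hypothetical; the real one is defined in `prometheus.py`):

```python
from prometheus_client import Histogram

TTFT = Histogram(
    "litellm_time_to_first_token_seconds",  # illustrative name
    "Time from request start to first streamed token",
)

def maybe_emit_ttft(stream: bool, ttft_seconds: float) -> None:
    if stream:  # non-streaming responses have no meaningful "first token" time
        TTFT.observe(ttft_seconds)
```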
* test: handle vertex api instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix deepinfra default api base
* fix(anthropic/transformation.py): remove anthropic beta header (#6361)
* feat(custom_logger.py): expose new `async_dataset_hook` for modifying/rejecting argilla items before logging
Allows user more control on what gets logged to argilla for annotations
* feat(google_ai_studio_endpoints.py): add new `/azure/*` pass through route
enables pass-through for azure provider
* feat(utils.py): support checking ollama `/api/show` endpoint for retrieving ollama model info
Fixes https://github.com/BerriAI/litellm/issues/6322
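Hedged sketch of the underlying call (ollama's show endpoint is a POST; the model name and local url are illustrative):

```python
import httpx

resp = httpx.post(
    "http://localhost:11434/api/show",
    json={"name": "llama3"},
)
resp.raise_for_status()
info = resp.json()
# typical fields: "details" (family, parameter_size, quantization), "template", "parameters"
print(info.get("details", {}))
```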
* fix(user_api_key_auth.py): add `/key/delete` to allowed_ui_routes
Fixes https://github.com/BerriAI/litellm/issues/6236
* fix(user_api_key_auth.py): remove type ignore
* fix(user_api_key_auth.py): route ui vs. api token checks differently
Fixes https://github.com/BerriAI/litellm/issues/6238
* feat(internal_user_endpoints.py): support setting models as a default internal user param
Closes https://github.com/BerriAI/litellm/issues/6239
* fix(user_api_key_auth.py): fix exception string
* fix(user_api_key_auth.py): fix error string
* fix: fix test