* fix(prometheus.py): support streaming end user litellm_proxy_total_requests_metric tracking
* fix(prometheus.py): add 'requested_model' and 'end_user_id' to 'litellm_request_total_latency_metric_bucket'
enables latency tracking by end user + requested model
* fix(prometheus.py): add end user, user and requested model metrics to 'litellm_llm_api_latency_metric'
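A minimal sketch of the label wiring these metric changes imply, using `prometheus_client` directly; metric and label names below are illustrative, not litellm's exact definitions:

```python
# Sketch only: latency histogram labeled by end user + requested model.
# Metric / label names are illustrative, not litellm's exact definitions.
from prometheus_client import Histogram

request_total_latency = Histogram(
    "litellm_request_total_latency_metric",
    "Total latency (seconds) for a request through the proxy",
    labelnames=["end_user", "user", "requested_model", "hashed_api_key"],
)

def observe_request_latency(
    latency_s: float, *, end_user: str, user: str, requested_model: str, hashed_api_key: str
) -> None:
    # each label combination becomes its own series, so latency can be
    # broken down per end user + requested model
    request_total_latency.labels(
        end_user=end_user,
        user=user,
        requested_model=requested_model,
        hashed_api_key=hashed_api_key,
    ).observe(latency_s)
```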
* test: update prometheus unit tests
* test(test_prometheus.py): update tests
* test(test_prometheus.py): fix test
* test: reorder test
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching
* fix(context_caching/transformation.py): just use last identified cache point
Fixes https://github.com/BerriAI/litellm/issues/6738
* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google
Fixes https://github.com/BerriAI/litellm/issues/6738
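Roughly, the transformation scans the message list for cache-marked entries and keeps only the first contiguous run, stopping at the first gap, which sidesteps Google's error when a system message splits the cached region. A sketch, with the `cache_control` marker shape assumed for illustration:

```python
# Sketch: return the (start, end) slice of the first contiguous run of
# cache-marked messages; the "cache_control" marker shape is assumed here.
from typing import List, Tuple

def first_contiguous_cache_block(messages: List[dict]) -> Tuple[int, int]:
    start = None
    for i, msg in enumerate(messages):
        is_cached = msg.get("cache_control") is not None
        if is_cached and start is None:
            start = i                      # block begins
        elif not is_cached and start is not None:
            return start, i                # stop at the first gap
    return (start, len(messages)) if start is not None else (0, 0)
```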
* fix(vertex_ai/gemini/): track context caching tokens
* refactor(gemini/): place transformation.py inside `chat/` folder
makes it easy for users to know we support the equivalent endpoint
* fix: fix import
* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder
makes it easier to see the cost calculation logic
* fix: fix linting errors
* fix: fix circular import
* feat(gemini/cost_calculator.py): support gemini context caching cost calculation
generalizes Anthropic's cost calculation function and reuses it across Anthropic + Gemini
* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching
Closes https://github.com/BerriAI/litellm/issues/6891
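The generalized calculation prices cached prompt tokens separately from fresh ones; an illustrative version (parameter names are assumptions, not litellm's exact model-map keys):

```python
# Illustrative prompt-caching cost calc shared across Anthropic/Gemini-style
# providers. Rate names are assumptions, not litellm's exact model-map keys.
def prompt_caching_cost(
    prompt_tokens: int,
    cached_tokens: int,
    completion_tokens: int,
    input_cost_per_token: float,
    output_cost_per_token: float,
    cache_read_cost_per_token: float,
) -> float:
    fresh_prompt_tokens = max(prompt_tokens - cached_tokens, 0)
    return (
        fresh_prompt_tokens * input_cost_per_token
        + cached_tokens * cache_read_cost_per_token   # cache reads are discounted
        + completion_tokens * output_cost_per_token
    )
```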
* docs(gemini.md): add gemini context caching architecture diagram
makes it easier for users to understand how context caching works
* docs(gemini.md): link to relevant gemini context caching code
* docs(gemini/context_caching): add README on GitHub, making it easy for devs to know context caching is supported + where to find the code
* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario
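For reference, Gemini 1.5 bills long prompts at a higher per-token rate once they cross the 128k threshold; a sketch of that tiering (this sketch assumes the higher rate applies to the whole prompt, which is how Google's published pricing reads):

```python
# Sketch of the 128k pricing tier for Gemini-style models. Assumes the
# above-128k rate applies to the entire prompt once the threshold is crossed.
GEMINI_128K_THRESHOLD = 128_000

def tiered_input_cost(prompt_tokens: int, base_rate: float, above_128k_rate: float) -> float:
    rate = above_128k_rate if prompt_tokens > GEMINI_128K_THRESHOLD else base_rate
    return prompt_tokens * rate
```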
* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation
* test: fix test
* fix(main.py): support 'mock_timeout=true' param
allows mock requests on the proxy to have a time delay, for testing
* fix(main.py): ensure mock timeouts raise litellm.Timeout error
triggers retry/fallbacks
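Conceptually, the mock timeout just waits out the configured timeout and then raises a timeout error so retry/fallback logic fires; a hand-rolled sketch, not litellm's implementation:

```python
# Hand-rolled sketch of the mock-timeout idea (not litellm's implementation):
# sleep for the configured timeout, then raise so retries/fallbacks trigger.
import asyncio

async def mock_timed_out_call(timeout: float):
    await asyncio.sleep(timeout)                             # simulate a hung upstream call
    raise TimeoutError(f"mocked timeout after {timeout}s")   # litellm raises litellm.Timeout here
```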
* fix: fix fallback + mock timeout testing
* fix(router.py): always return remaining tpm/rpm limits, if limits are known
allows for rate limit headers to be guaranteed
* docs(timeout.md): add docs on mock timeout = true
* fix(main.py): fix linting errors
* test: fix test
* fix(proxy_server.py): enforce that team-id-based model add only works for enterprise users
* fix(auth_checks.py): enforce common_checks can only be imported by user_api_key_auth.py
* fix(auth_checks.py): insert 'not premium user' error message when a common checks run fails
* ui fix - allow searching model list + fix bug on filtering
* qa fix - use correct provider name for azure_text
* ui wrap content onto next line
* ui fix - allow selecting current UI session when logging in
* ui session budgets
* ui show provider models on wildcard models
* test provider name appears in model list
* ui fix auto scroll on chat ui tab
* fix(utils.py): e2e azure tts cost tracking working
moves tts response obj to include hidden params (allows litellm call id, etc. to be sent in response headers); fixes spend_tracking_utils logging payload to account for the non-base-model use case
Fixes https://github.com/BerriAI/litellm/issues/7223
* fix: fix linting errors
* build(model_prices_and_context_window.json): add bedrock llama 3.3
Closes https://github.com/BerriAI/litellm/issues/7329
* fix(openai.py): fix return type for sync openai httpx response
* test: update test
* fix(spend_tracking_utils.py): fix if check
* fix(spend_tracking_utils.py): fix if check
* test: improve debugging for test
* fix: fix import
* fix(proxy_track_cost_callback.py): log to db if only end user param given
* fix: allows for jwt-auth based end user id spend tracking to work
* fix(utils.py): fix 'get_end_user_id_for_cost_tracking' to use 'user_api_key_end_user_id'
more stable - works with jwt-auth based end user tracking as well
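A sketch of the lookup order (metadata layout assumed for illustration, not the actual utils.py body): prefer the end-user id resolved during auth, then fall back to the `user` field from the request body.

```python
# Sketch: prefer the end-user id resolved by auth (works for JWT auth too),
# fall back to the request body's `user` field. Metadata layout assumed.
from typing import Optional

def end_user_id_for_cost_tracking(litellm_params: dict) -> Optional[str]:
    metadata = litellm_params.get("metadata") or {}
    proxy_request = litellm_params.get("proxy_server_request") or {}
    return (
        metadata.get("user_api_key_end_user_id")
        or (proxy_request.get("body") or {}).get("user")
    )
```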
* test(test_jwt.py): add e2e unit test to confirm end user cost tracking works for spend logs
* test: update test to use end_user api key hash param
* fix(langfuse.py): support end user cost tracking via jwt auth + langfuse
logs end user to langfuse if decoded from jwt token
* fix: fix linting errors
* test: fix test
* test: fix test
* fix: fix end user id extraction
* fix: run test earlier
* feat(router.py): support passing model-specific messages in fallbacks
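Illustrative usage only, assuming a fallback entry may carry its own `messages` (see the reliability docs added below for the supported syntax):

```python
# Illustrative only -- assumes a fallback entry can specify both the fallback
# model and a model-specific prompt to use if that fallback is triggered.
import litellm

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "long prompt tuned for gpt-4o ..."}],
    fallbacks=[
        {
            "model": "claude-3-5-sonnet-20240620",
            "messages": [{"role": "user", "content": "shorter prompt tuned for claude ..."}],
        }
    ],
)
```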
* docs(routing.md): separate router timeouts into separate doc
allow for 1 fallbacks doc (across proxy/router)
* docs(routing.md): cleanup router docs
* docs(reliability.md): cleanup docs
* docs(reliability.md): cleaned up fallback doc
just have 1 doc across sdk/proxy
simplifies docs
* docs(reliability.md): add setting model-specific fallback prompts
* fix: fix linting errors
* test: skip test causing openai rate limit errors
* test: fix test
* test: run vertex test first to catch error
* fix(proxy_server.py): only update k,v pair if v is not empty/null
Fixes https://github.com/BerriAI/litellm/issues/6787
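The fix amounts to filtering out empty/null values before applying an update, so existing fields aren't clobbered with blanks; a minimal sketch:

```python
# Minimal sketch: keep only non-empty values when building an update payload,
# so existing DB fields aren't overwritten with blanks.
def non_empty_updates(updates: dict) -> dict:
    return {k: v for k, v in updates.items() if v is not None and v != ""}
```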
* test(test_router.py): cleanup duplicate calls
* test: add new test stream options drop params test
* test: update optional params / stream options test to test for vertex ai mistral route specifically
Addresses https://github.com/BerriAI/litellm/issues/7309
* fix(proxy_server.py): fix linting errors
* fix: fix linting errors
* fix(openai.py): fix returning o1 non-streaming requests
fixes issue where fake stream was always true for o1
* build(model_prices_and_context_window.json): add 'supports_vision' for o1 models
* fix: add internal server error exception mapping
* fix(base_llm_unit_tests.py): drop temperature from test
* test: mark prompt caching as a flaky test
* fix(health.md): add rerank model health check information
* build(model_prices_and_context_window.json): add gemini 2.0 for google ai studio - pricing + commercial rate limits
* build(model_prices_and_context_window.json): add gemini-2.0 supports audio output = true
* docs(team_model_add.md): clarify allowing teams to add models is an enterprise feature
* fix(o1_transformation.py): add support for 'n', 'response_format' and 'stop' params for o1 and 'stream_options' param for o1-mini
* build(model_prices_and_context_window.json): add 'supports_system_message' to supporting openai models
needed as o1-preview and o1-mini models don't support 'system' messages
* fix(o1_transformation.py): translate system message based on if o1 model supports it
* fix(o1_transformation.py): return 'stream' param support if o1-mini/o1-preview
o1 currently doesn't support streaming, but the other model versions do
Fixes https://github.com/BerriAI/litellm/issues/7292
* fix(o1_transformation.py): return tool calling/response_format in supported params if model map says so
Fixes https://github.com/BerriAI/litellm/issues/7292
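A sketch of the system-message translation (not the exact transformation code): when the model map says the model can't take a system message, rewrite it as a user message instead of dropping it.

```python
# Sketch: rewrite system messages as user messages for models (o1-preview,
# o1-mini) that don't accept the system role. Not the exact litellm code.
from typing import List

def translate_system_messages(messages: List[dict], supports_system_message: bool) -> List[dict]:
    if supports_system_message:
        return messages
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]
```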
* fix: fix linting errors
* fix: update '_transform_messages'
* fix(o1_transformation.py): fix provider passed for supported param checks
* test(base_llm_unit_tests.py): skip test if api takes >5s to respond
* fix(utils.py): return false in 'supports_factory' if the value can't be found
* fix(o1_transformation.py): always return stream + stream_options as supported params + handle stream options being passed in for azure o1
* feat(openai.py): support stream faking natively in openai handler
Allows stream faking for just the "o1" model, while keeping native streaming for o1-mini and o1-preview
Fixes https://github.com/BerriAI/litellm/issues/7292
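Roughly: only fake the stream when the model can't stream natively, and fake it by chunking an already-complete response. A sketch, not the handler's actual code:

```python
# Sketch of native stream-faking: decide per model, then chunk a completed
# response so stream consumers still work. Not the handler's actual code.
def should_fake_stream(model: str) -> bool:
    return model == "o1"   # o1-mini / o1-preview stream natively

def fake_stream(full_text: str, chunk_size: int = 40):
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i : i + chunk_size]
```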
* fix(openai.py): use inference param instead of original optional param
* fix(hosted_vllm/transformation.py): return fake api key if none given. Prevents httpx error
Fixes https://github.com/BerriAI/litellm/issues/7291
* test: fix test
* fix(main.py): add hosted_vllm/ support for embeddings endpoint
Closes https://github.com/BerriAI/litellm/issues/7290
* docs(vllm.md): add docs on vllm embeddings usage
* fix(__init__.py): fix sambanova model test
* fix(base_llm_unit_tests.py): skip pydantic obj test if model takes >5s to respond
* fix(proxy_server.py): pass model access groups to get_key/get_team models
allows end users to see the actual models they have access to, instead of the default models
* fix(auth_checks.py): fix linting errors
* fix: fix linting errors
* fix(factory.py): skip empty text blocks for bedrock user messages
Fixes https://github.com/BerriAI/litellm/issues/7169
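A sketch of the filter, with the content-block shape assumed to be the OpenAI-style `{"type": "text", "text": ...}`: drop user text blocks that are empty or whitespace-only, since Bedrock rejects them.

```python
# Sketch: drop empty/whitespace-only text blocks from a user message's
# content list before sending to Bedrock. Block shape assumed for illustration.
def drop_empty_text_blocks(content_blocks: list) -> list:
    return [
        block
        for block in content_blocks
        if not (
            isinstance(block, dict)
            and block.get("type") == "text"
            and not (block.get("text") or "").strip()
        )
    ]
```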
* Add support for Gemini 2.0 GoogleSearch tool (#7257)
* Add support for google_search tool in gemini 2.0
* Add/modify tests
* Fix grounding check
* Remove 2.0 grounding test; exclude experimental model in VERTEX_MODELS_TO_NOT_TEST
* Swap order of tools
* Fix formatting
* fix(get_api_base.py): return api base in streaming response
Fixes https://github.com/BerriAI/litellm/issues/7249
Closes https://github.com/BerriAI/litellm/pull/7250
* fix(cost_calculator.py): only set base model to model if not none
Fixes https://github.com/BerriAI/litellm/issues/7223
* fix(cost_calculator.py): enforce stricter order when picking model for cost calculation
* fix(cost_calculator.py): fix '_select_model_name_for_cost_calc' to return model name with region name prefix if provided
* fix(utils.py): fix 'get_model_info()' to handle edge case where model name starts with custom llm provider AND custom llm provider is given
* fix(cost_calculator.py): handle `custom_llm_provider-` scenario
* fix(cost_calculator.py): e2e working tts cost tracking
ensures the initial message is passed in to the cost calculator
* fix(factory.py): suppress linting errors
* fix(cost_calculator.py): strip llm provider from model name after selecting cost calc model
* fix(litellm_logging.py): store initial request in 'input' field + accept base_model to be passed in litellm_params directly
* test: handle none env var value in flaky test
* fix(litellm_logging.py): fix linting errors
---------
Co-authored-by: Sam B <samlingx@gmail.com>
* fix(utils.py): fix openai-like api response format parsing
Fixes issue passing structured output to litellm_proxy/ route
* fix(cost_calculator.py): fix whisper transcription cost calc to use file duration, not response time
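In other words, transcription cost scales with the audio's duration, not with how long the API took to respond; an illustrative calculation (the per-second rate is a stand-in, not a specific model-map key):

```python
# Illustrative: bill on audio duration, not API response time.
# The per-second rate here is a stand-in, not a specific model-map key.
def transcription_cost(audio_duration_seconds: float, cost_per_second: float) -> float:
    return audio_duration_seconds * cost_per_second

# e.g. a 90s file at $0.0001/s costs $0.009, regardless of response latency
```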
* test: skip test if credentials not found
* docs(input.md): document 'extra_headers' param support
* fix: #7239 to move Nova topK parameter to `additionalModelRequestFields` (#7240)
Co-authored-by: Ryan Hoium <rhoium>
---------
Co-authored-by: ryanh-ai <3118399+ryanh-ai@users.noreply.github.com>
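For context, an illustrative boto3 Converse call showing where Nova's `topK` ends up; the nesting follows AWS's Nova examples and should be checked against the AWS docs:

```python
# Illustrative only: Nova's topK is not part of the standard inferenceConfig,
# so it rides in additionalModelRequestFields. Verify nesting against AWS docs.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.converse(
    modelId="us.amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, Nova"}]}],
    inferenceConfig={"temperature": 0.7, "topP": 0.9, "maxTokens": 256},
    additionalModelRequestFields={"inferenceConfig": {"topK": 20}},
)
```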
* fix(router.py): fix reading + using deployment-specific num retries on router
Fixes https://github.com/BerriAI/litellm/issues/7001
* fix(router.py): ensure 'timeout' in litellm_params overrides any value in router settings
Refactors all routes to use a common '_update_kwargs_with_deployment', which handles the timeout
* fix(router.py): fix timeout check
* fix(main.py): fix retries being multiplied when using openai sdk
Closes https://github.com/BerriAI/litellm/pull/7130
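The precedence rule boils down to: deployment-level `litellm_params` win over router-level defaults for `timeout` / `num_retries`. A sketch, not the actual `_update_kwargs_with_deployment` body:

```python
# Sketch of the precedence rule: deployment-specific timeout / num_retries
# override router defaults. Not the actual _update_kwargs_with_deployment body.
def merge_deployment_params(router_defaults: dict, deployment_litellm_params: dict) -> dict:
    merged = dict(router_defaults)
    for key in ("timeout", "num_retries"):
        if deployment_litellm_params.get(key) is not None:
            merged[key] = deployment_litellm_params[key]
    return merged
```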
* docs(prompt_management.md): add langfuse prompt management doc
* feat(team_endpoints.py): allow teams to add their own models
Enables teams to call their own finetuned models via the proxy
* test: add better enforcement check testing for `/model/new` now that teams can add their own models
* docs(team_model_add.md): tutorial for allowing teams to add their own models
* test: fix test
* fix test_deployment_budget_limits_e2e_test
* refactor async_log_success_event to track spend for provider + deployment
* fix format
* rename class to RouterBudgetLimiting
* rename func
* rename types used for budgets
* add new types for deployment budgets
* add budget limits for deployments
* fix checking budgets set for provider
* update file names
* fix linting error
* _track_provider_remaining_budget_prometheus
* async_filter_deployments
* fix model list passed to router
* update error
* test_deployment_budgets_e2e_test_expect_to_fail
* fix test case
* run deployment budget limits
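A rough sketch of the deployment-budget filtering idea (names assumed, not the actual RouterBudgetLimiting code): drop deployments whose tracked spend in the current budget window has reached their max budget.

```python
# Rough sketch of per-deployment budget filtering. Names are assumptions,
# not the actual RouterBudgetLimiting implementation.
from typing import Dict, List

def filter_deployments_within_budget(
    deployments: List[dict],
    spend_by_deployment_id: Dict[str, float],  # e.g. updated in async_log_success_event
) -> List[dict]:
    allowed = []
    for deployment in deployments:
        max_budget = (deployment.get("litellm_params") or {}).get("max_budget")
        dep_id = (deployment.get("model_info") or {}).get("id")
        spend = spend_by_deployment_id.get(dep_id, 0.0)
        if max_budget is None or spend < max_budget:
            allowed.append(deployment)
    return allowed
```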