* fix(route_llm_request.py): move to using common router, even for client-side credentials
ensures fallback / cooldown logic still works
* test(test_route_llm_request.py): add unit test for route request
* feat(router.py): generate unique model id when client-side credential is passed in
Prevents cooldowns for api key 1 from impacting api key 2
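The idea behind the unique model id, as a minimal sketch — the helper name and hashing scheme are illustrative assumptions, not litellm's actual implementation:

```python
# Derive a deterministic per-credential id, so a cooldown triggered by one
# client-side API key never lands on a deployment used by a different key.
# make_clientside_model_id is a hypothetical name for illustration.
import hashlib


def make_clientside_model_id(model_name: str, api_key: str) -> str:
    digest = hashlib.sha256(f"{model_name}:{api_key}".encode()).hexdigest()[:16]
    return f"{model_name}-{digest}"


# Two client keys map to two distinct router entries.
assert make_clientside_model_id("gpt-4", "sk-key-1") != make_clientside_model_id(
    "gpt-4", "sk-key-2"
)
```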
* test(test_router.py): update testing to ensure original litellm params not mutated
* fix(router.py): upsert client-side call into llm router model list
enables cooldown logic to work accurately
* fix: fix linting error
* test(test_router_utils.py): add direct test for new util on router
* fix(streaming_handler.py): fix deepseek reasoning content streaming
Fixes https://github.com/BerriAI/litellm/issues/8939
* test(test_streaming_handler.py): add unit test for the streaming handler's 'is_chunk_non_empty' function
ensures 'reasoning_content' is handled correctly
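Roughly the shape of the check the test above exercises — a sketch with simplified arguments, not the streaming handler's real signature:

```python
# A chunk counts as non-empty if any delta field carries real content;
# deepseek sends its chain-of-thought in 'reasoning_content', which an
# emptiness check that only looks at 'content' would miss.
from typing import Optional


def is_chunk_non_empty(
    content: Optional[str],
    reasoning_content: Optional[str],
    tool_calls: Optional[list],
) -> bool:
    if content:  # empty string "" is falsy, i.e. treated as no content
        return True
    if reasoning_content:  # deepseek reasoning deltas arrive here
        return True
    return bool(tool_calls)


assert is_chunk_non_empty(None, "thinking...", None) is True
assert is_chunk_non_empty("", None, None) is False
```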
* fix(proxy_server.py): fix setting the router's redis cache when caching is enabled in litellm_settings
enables configuration options like namespace to just work
* fix(redis_cache.py): fix key for async increment, to use the set namespace
prevents collisions if redis instance shared across environments
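The namespacing fix boils down to composing the key once and using it for reads and increments alike — a sketch with a hypothetical helper name:

```python
# Prefix every cache key with the configured namespace so two environments
# pointing at the same Redis instance increment different counters.
from typing import Optional


def namespaced_key(key: str, namespace: Optional[str]) -> str:
    return f"{namespace}:{key}" if namespace else key


assert namespaced_key("spend:team-1", "staging") == "staging:spend:team-1"
assert namespaced_key("spend:team-1", "prod") == "prod:spend:team-1"
```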
* fix load tests on litellm release notes
* fix caching on main branch (#8858)
* fix(streaming_handler.py): fix 'is delta empty' check to handle empty strings
* fix(streaming_handler.py): fix delta chunk on final response
* [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860)
* fix deepseek error
* test_deepseek_provider_async_completion
* fix get_complete_url
* bump: version 1.61.17 → 1.61.18
* bump: version 1.61.18 → 1.61.19
* vertex ai anthropic thinking param support (#8853)
* fix(vertex_llm_base.py): handle credentials passed in as dictionary
* fix(router.py): support vertex credentials as json dict
* test(test_vertex.py): allows easier testing
mock anthropic thinking response for vertex ai
* test(vertex_ai_partner_models/): don't remove "@" from model name
removing it breaks anthropic cost calculation
* test: move testing
* fix: fix linting error
* fix: fix linting error
* fix(vertex_ai_partner_models/main.py): split @ for codestral model
* test: fix test
* fix: fix stripping "@" on mistral models
* fix: fix test
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* ui: fix left nav, allow internal users to view their own logs
* pass user_id in uiSpendLogs call
* ui filter logs for internal user
* fix internal users page
* ui show correct message when store prompts is disabled
* fix internal user logs
* test_ui_view_spend_logs_with_user_id
* test spend management endpoint
* fix test_moderations_bad_model
* use async_post_call_failure_hook
* basic logging errors in DB
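A hedged sketch of wiring error logging through that hook — the signature is approximated from litellm's CustomLogger interface and may differ between versions; the DB write is stubbed out:

```python
import traceback

from litellm.integrations.custom_logger import CustomLogger


class ErrorLogHandler(CustomLogger):
    # Called by the proxy after a request fails; signature approximated.
    async def async_post_call_failure_hook(
        self, request_data, original_exception, user_api_key_dict
    ):
        error_record = {
            "error": str(original_exception),
            "traceback": traceback.format_exc(),  # shown in the error viewer
            "model": request_data.get("model"),
        }
        print(error_record)  # a real handler would persist this to the DB
```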
* show status on ui
* show status on ui
* ui show request / response side by side
* stash fixes
* working, track raw request
* track error info in metadata
* fix showing error / request / response logs
* show traceback on error viewer
* ui with traceback of error
* fix async_post_call_failure_hook
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads
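A minimal sketch of the failure mode and a guarded fallback (the commit may instead switch wholesale to json.loads; the function name is illustrative):

```python
# orjson rejects some inputs the stdlib parser accepts (e.g. lone-surrogate
# escape sequences that can appear around emojis), so fall back on failure.
import json

import orjson


def parse_request_body(raw: bytes) -> dict:
    try:
        return orjson.loads(raw)
    except orjson.JSONDecodeError:
        return json.loads(raw)
```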
* test_get_error_information
* fix code quality
* rename proxy track cost callback test
* _should_store_errors_in_spend_logs
* feature flag error logs
* Revert "_should_store_errors_in_spend_logs"
This reverts commit 7f345df477.
* Revert "feature flag error logs"
This reverts commit 0e90c022bb.
* test_spend_logs_payload
* fix OTEL log_db_metrics
* fix import json
* fix ui linting error
* test_async_post_call_failure_hook
* test_chat_completion_bad_model_with_spend_logs
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* test_openai_assistants_e2e_operations
* test openai assistants pass through
* fix GET request on pass through handler
* _make_non_streaming_http_request
* _is_assistants_api_request
* test_openai_assistants_e2e_operations
* test_openai_assistants_e2e_operations
* openai_proxy_route
* docs openai pass through
* docs openai pass through
* docs openai pass through
* test pass through handler
* Potential fix for code scanning alert no. 2240: Incomplete URL substring sanitization
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
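For context, the class of fix this alert asks for: compare the parsed hostname rather than testing for a substring, which a crafted URL would defeat. Hostnames here are illustrative, not the actual code:

```python
from urllib.parse import urlparse


def is_openai_host(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host == "api.openai.com" or host.endswith(".openai.com")


assert is_openai_host("https://api.openai.com/v1/assistants") is True
# a substring check ("api.openai.com" in url) would wrongly pass this:
assert is_openai_host("https://evil.com/?next=api.openai.com") is False
```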
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix running litellm on Windows
* fix importing litellm
* _init_hypercorn_server
* linting fix
* TestProxyInitializationHelpers
* ci/cd run again
* ci/cd run again
* feat(bedrock/rerank): infer model region if model is given as an ARN
* test: add unit testing to ensure the bedrock region name is inferred from the ARN on rerank
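Bedrock model ARNs embed the region as the fourth colon-separated field, which is what the inference relies on — a sketch with a hypothetical helper name:

```python
# arn:aws:bedrock:<region>:<account-id>:... -> extract <region>
from typing import Optional


def region_from_arn(model: str) -> Optional[str]:
    if not model.startswith("arn:"):
        return None
    parts = model.split(":")
    return parts[3] if len(parts) > 3 and parts[3] else None


assert (
    region_from_arn("arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc")
    == "us-east-1"
)
assert region_from_arn("cohere.rerank-v3-5") is None
```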
* feat(bedrock/rerank/transformation.py): include search units for bedrock rerank result
Resolves https://github.com/BerriAI/litellm/issues/7258#issuecomment-2671557137
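Search-unit billing (Cohere-style: one unit covers a query against up to 100 documents) is what makes the unit count worth surfacing — a sketch with a placeholder price, not real pricing:

```python
import math

PRICE_PER_SEARCH_UNIT = 0.001  # placeholder for illustration only


def rerank_cost(num_documents: int, docs_per_search_unit: int = 100) -> float:
    search_units = max(1, math.ceil(num_documents / docs_per_search_unit))
    return search_units * PRICE_PER_SEARCH_UNIT


assert rerank_cost(80) == PRICE_PER_SEARCH_UNIT       # one search unit
assert rerank_cost(250) == 3 * PRICE_PER_SEARCH_UNIT  # three search units
```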
* test(test_bedrock_completion.py): add testing for bedrock cohere rerank
* feat(cost_calculator.py): refactor rerank cost tracking to support bedrock cost tracking
* build(model_prices_and_context_window.json): add amazon.rerank model to model cost map
* fix(cost_calculator.py, bedrock/common_utils.py): get base model from model given as an ARN -> handles rerank models
* build(model_prices_and_context_window.json): add bedrock cohere rerank pricing
* feat(bedrock/rerank): migrate bedrock config to base rerank config
* Revert "feat(bedrock/rerank): migrate bedrock config to base rerank config"
This reverts commit 84fae1f167.
* test: add testing to ensure large doc / queries are correctly counted
* Revert "test: add testing to ensure large doc / queries are correctly counted"
This reverts commit 4337f1657e.
* fix(migrate-jina-ai-to-rerank-config): enables cost tracking
* refactor(jina_ai/): finish migrating jina ai to base rerank config
enables cost tracking
* fix(jina_ai/rerank): e2e jina ai rerank cost tracking
* fix: cleanup dead code
* fix: fix python3.8 compatibility error
* test: fix test
* test: add e2e testing for azure ai rerank
* fix: fix linting error
* test: mark cohere as flaky