Krish Dholakia
18da7adce9
feat(router.py): Support Loadbalancing batch azure api endpoints ( #5469 )
* feat(router.py): initial commit for loadbalancing azure batch api endpoints
Closes https://github.com/BerriAI/litellm/issues/5396
* fix(router.py): working `router.acreate_file()`
* feat(router.py): working router.acreate_batch endpoint
* feat(router.py): expose router.aretrieve_batch function
Makes it easy for the user to retrieve batch information
* feat(router.py): support 'router.alist_batches' endpoint
Adds support for getting all batches across all endpoints
* feat(router.py): working loadbalancing on `/v1/files`
* feat(proxy_server.py): working loadbalancing on `/v1/batches`
* feat(proxy_server.py): working loadbalancing on Retrieve + List batch
2024-09-02 21:32:55 -07:00
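The batch API loadbalancing in this commit hinges on multiple Azure deployments sharing one model group name, so file/batch calls can be spread across them. A minimal config sketch follows; the deployment names, endpoints, and keys are placeholders, and the router calls shown in comments are the methods named in the commit messages above:

```python
# Sketch: two Azure deployments registered under the same model group
# name ("azure-gpt"), so batch/file requests can be load-balanced.
# All endpoint names, api_base values, and keys below are placeholders.
model_list = [
    {
        "model_name": "azure-gpt",  # shared model group name
        "litellm_params": {
            "model": "azure/gpt-deployment-eu",
            "api_base": "https://eu.example.azure.com",
            "api_key": "sk-placeholder-eu",
        },
    },
    {
        "model_name": "azure-gpt",
        "litellm_params": {
            "model": "azure/gpt-deployment-us",
            "api_base": "https://us.example.azure.com",
            "api_key": "sk-placeholder-us",
        },
    },
]

# Per the commit messages above, a router built from this config
# exposes loadbalanced batch/file methods, e.g.:
#   router = litellm.Router(model_list=model_list)
#   await router.acreate_file(model="azure-gpt", ...)
#   await router.acreate_batch(model="azure-gpt", ...)
#   await router.aretrieve_batch(...)
#   await router.alist_batches(model="azure-gpt")
model_groups = {d["model_name"] for d in model_list}
```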
Krish Dholakia
ca4e746545
LiteLLM minor fixes + improvements (31/08/2024) ( #5464 )
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
2024-09-01 13:31:42 -07:00
Krish Dholakia
321b0961b5
fix: Minor LiteLLM Fixes + Improvements (29/08/2024) ( #5436 )
* fix(model_checks.py): support returning wildcard models on `/v1/models`
Fixes https://github.com/BerriAI/litellm/issues/4903
* fix(bedrock_httpx.py): support calling bedrock via api_base
Closes https://github.com/BerriAI/litellm/pull/4587
* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked
Fixes https://github.com/BerriAI/litellm/issues/5433
* feat(router.py): support setting 'weight' param for models on router
Closes https://github.com/BerriAI/litellm/issues/5410
* test(test_bedrock_completion.py): add unit test for custom api base
* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
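The 'weight' param added above biases which deployment in a model group gets picked. A simplified sketch of the idea (weighted random selection with a default weight of 1) — this illustrates the concept only, not litellm's actual selection code:

```python
import random

def pick_deployment(deployments, rng):
    """Weighted random pick across deployments in one model group.
    'weight' defaults to 1 when unset. Simplified sketch of the
    'weight' param idea, not litellm's implementation."""
    weights = [d.get("weight", 1) for d in deployments]
    return rng.choices(deployments, weights=weights, k=1)[0]

deployments = [
    {"id": "a", "weight": 3},  # should receive ~3x the traffic of "b"
    {"id": "b"},               # implicit weight 1
]
rng = random.Random(0)  # seeded for reproducibility
picks = [pick_deployment(deployments, rng)["id"] for _ in range(1000)]
```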
Krrish Dholakia
6e4f0a95da
fix(router.py): fix cooldown check
2024-08-28 16:38:42 -07:00
Ishaan Jaff
df4f1458e6
feat - add rerank on proxy
2024-08-27 17:36:40 -07:00
Krrish Dholakia
c558648180
fix(router.py): fix aembedding type hints
Fixes https://github.com/BerriAI/litellm/issues/5383
2024-08-27 14:29:18 -07:00
Krrish Dholakia
c795e9feeb
fix(router.py): enable dynamic retry after in exception string
Updates cooldown logic to cooldown individual models
Closes https://github.com/BerriAI/litellm/issues/1339
2024-08-24 16:59:30 -07:00
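"Dynamic retry after in exception string" suggests pulling a provider-supplied wait time out of the error message and cooling down just that deployment. A sketch of the parsing half; the regex and message shape are assumptions, not litellm's code:

```python
import re

def parse_retry_after(error_message: str):
    """Extract a retry-after duration (in seconds) from a provider
    error string, returning None when absent. The message format
    matched here is an assumed example, not litellm's actual logic."""
    match = re.search(r"retry after (\d+) seconds?", error_message, re.IGNORECASE)
    return int(match.group(1)) if match else None

seconds = parse_retry_after("Rate limit exceeded. Retry after 7 seconds.")
```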
Krrish Dholakia
27a5cd12e0
fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes issue where the router's retry-after was not using the Azure / OpenAI values
2024-08-24 12:30:30 -07:00
Krrish Dholakia
6415f92bbb
fix(router.py): don't cooldown on apiconnectionerrors
Fixes issue where model would be in cooldown due to api connection errors
2024-08-24 09:53:05 -07:00
Krrish Dholakia
45048ee006
fix(router.py): fix linting error
2024-08-21 15:35:10 -07:00
Ishaan Jaff
528bb3f7ac
test test_using_default_working_fallback
2024-08-20 13:32:55 -07:00
Ishaan Jaff
165e0e3ad1
fix run sync fallbacks
2024-08-20 12:55:36 -07:00
Ishaan Jaff
078fe97053
fix: fallbacks don't recurse on the same fallback
2024-08-20 12:50:20 -07:00
Ishaan Jaff
fb16ff2335
fix: don't retry errors when no healthy deployments are available
2024-08-20 12:17:05 -07:00
Ishaan Jaff
5e2f962ba3
test + never retry on 404 errors
2024-08-20 11:59:43 -07:00
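The "never retry on 404 errors" commit implies a status-code-based retry policy. A sketch of such a policy; the exact set of retryable codes here is an assumption inferred from the commit titles (no healthy deployments / 404s are terminal, rate limits and server errors retryable):

```python
def should_retry(status_code: int) -> bool:
    """Sketch of a retry policy consistent with the commits above:
    404 is never retried; 408 (timeout) and 429 (rate limit) are;
    5xx server errors are. The precise status set is an assumption,
    not litellm's documented behavior."""
    if status_code == 404:
        return False
    if status_code in (408, 429):
        return True
    return status_code >= 500
```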
Ishaan Jaff
7171efc729
use model access groups for teams
2024-08-17 16:45:53 -07:00
Krrish Dholakia
2874b94fb1
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 09:22:47 -07:00
Ishaan Jaff
25af3ffe5b
v0 track fallback events
2024-08-10 13:31:00 -07:00
Krrish Dholakia
482acc7ee1
fix(router.py): fallback on 400-status code requests
2024-08-09 12:16:49 -07:00
Krrish Dholakia
07e5847e65
feat(router.py): allow using .acompletion() for request prioritization
Allows the /chat/completions endpoint to work for request prioritization calls
2024-08-07 16:43:12 -07:00
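Request prioritization means higher-priority requests are dequeued and served first. A minimal heap-based sketch of that concept (lower number = higher priority, FIFO within a priority level); this illustrates the idea only, not litellm's scheduler:

```python
import heapq
import itertools

class PriorityQueueSketch:
    """Minimal request-prioritization sketch: lower 'priority'
    values are served first; a monotonic counter keeps FIFO order
    for requests at the same priority. Concept demo only."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def put(self, priority: int, request):
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def get(self):
        return heapq.heappop(self._heap)[2]

q = PriorityQueueSketch()
q.put(5, "background-job")
q.put(0, "interactive-chat")  # higher priority, enqueued second
first = q.get()
```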
Ishaan Jaff
a0b2c107c4
fix getting provider_specific_deployment
2024-08-07 15:20:59 -07:00
Ishaan Jaff
31e4fca748
fix use provider specific routing
2024-08-07 14:37:20 -07:00
Ishaan Jaff
bb9493e5f7
router use provider specific wildcard routing
2024-08-07 14:12:10 -07:00
Ishaan Jaff
6a1a4eb822
add + test provider specific routing
2024-08-07 13:49:46 -07:00
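Provider-specific wildcard routing lets a model group like `anthropic/*` catch any requested model under that provider prefix. A sketch using stdlib glob matching; the exact pattern semantics litellm applies are an assumption:

```python
from fnmatch import fnmatch

def matches_wildcard(model_group: str, requested: str) -> bool:
    """Sketch of provider-specific wildcard routing: a group name
    such as 'anthropic/*' matches any requested model with that
    provider prefix. Glob semantics here are an assumption, not
    necessarily litellm's exact matching rules."""
    return fnmatch(requested, model_group)
```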
Krrish Dholakia
0de640700d
fix(router.py): add reason for fallback failure to client-side exception string
Makes it easier to debug why a fallback failed to occur
2024-08-07 13:02:47 -07:00
Ishaan Jaff
0dd8f50477
use router_cooldown_handler
2024-08-07 10:40:55 -07:00
Krrish Dholakia
fdb47e5479
fix: fix test to specify allowed_fails
2024-08-05 21:39:59 -07:00
Krrish Dholakia
934883999a
fix(router.py): move deployment cooldown list message to error log, not client-side
don't show user all deployments
2024-08-03 12:49:39 -07:00
Krrish Dholakia
b0d2727bbf
feat(router.py): add flag for mock testing loadbalancing for rate limit errors
2024-08-03 12:34:11 -07:00
Krrish Dholakia
dc58b9f33e
fix(utils.py): fix linting errors
2024-07-30 18:38:10 -07:00
Krrish Dholakia
96ad9c877c
fix(router.py): gracefully handle scenario where completion response doesn't have total tokens
Closes https://github.com/BerriAI/litellm/issues/4968
2024-07-30 15:14:03 -07:00
Krrish Dholakia
3a1eedfbf3
feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
2024-07-26 21:51:54 -07:00
Krrish Dholakia
e39ff46222
docs(config.md): update wildcard docs
2024-07-26 08:59:53 -07:00
Ishaan Jaff
a46c463dee
router support setting pass_through_all_models
2024-07-25 18:34:12 -07:00
Krrish Dholakia
1d33759bb1
fix(router.py): add support for diskcache to router
2024-07-25 14:30:46 -07:00
Ishaan Jaff
7888074012
fix - test router debug logs
2024-07-20 18:45:31 -07:00
Ishaan Jaff
fcee8bc61f
router - use verbose logger when using litellm.Router
2024-07-20 17:36:25 -07:00
Ishaan Jaff
d1a4246d2b
control using enable_tag_filtering
2024-07-18 19:39:04 -07:00
Ishaan Jaff
cd40d58544
router - refactor to tag based routing
2024-07-18 19:22:09 -07:00
Ishaan Jaff
778cb8799e
Merge pull request #4786 from BerriAI/litellm_use_model_tier_keys
[Feat-Enterprise] Use free/paid tiers for Virtual Keys
2024-07-18 18:07:09 -07:00
Krrish Dholakia
5d0bb0c6ee
fix(utils.py): fix status code in exception mapping
2024-07-18 18:04:59 -07:00
Ishaan Jaff
d4cad75d34
router - use free paid tier routing
2024-07-18 17:09:42 -07:00
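The tag-based routing and free/paid tier commits above both reduce to filtering the deployment pool by tags attached to the request. A simplified sketch, where untagged deployments serve as a default pool; the field names mirror the commit messages, but the exact fallback semantics are an assumption:

```python
def filter_by_tags(deployments, request_tags):
    """Sketch of tag-based routing: keep deployments whose 'tags'
    overlap the request's tags; if none match, fall back to untagged
    deployments as a default pool. Semantics are an assumption, not
    litellm's documented behavior."""
    tagged = [
        d for d in deployments
        if set(d.get("tags", [])) & set(request_tags)
    ]
    if tagged:
        return tagged
    return [d for d in deployments if not d.get("tags")]

deployments = [
    {"id": "free-model", "tags": ["free"]},
    {"id": "paid-model", "tags": ["paid"]},
    {"id": "default-model"},  # untagged: default pool
]
```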
Krrish Dholakia
432b7ae264
fix(router.py): check for request_timeout in acompletion
support 'request_timeout' param in router acompletion
2024-07-17 17:19:06 -07:00
Ishaan Jaff
dc5c72d04e
router return get_deployment_by_model_group_name
2024-07-15 19:27:12 -07:00
Krish Dholakia
f4d140efec
Merge pull request #4635 from BerriAI/litellm_anthropic_adapter
Anthropic `/v1/messages` endpoint support
2024-07-10 22:41:53 -07:00
Krrish Dholakia
48be4ce805
feat(proxy_server.py): working /v1/messages with config.yaml
Adds async router support for adapter_completion call
2024-07-10 18:53:54 -07:00
Ishaan Jaff
a9e15dad62
feat - add DELETE assistants endpoint
2024-07-10 11:37:37 -07:00
Ishaan Jaff
5880adea95
router - add acreate_assistants
2024-07-09 09:46:28 -07:00
Krish Dholakia
c6b6dbeb6b
Merge branch 'main' into litellm_fix_httpx_transport
2024-07-06 19:12:06 -07:00
Ishaan Jaff
f6eccf84ce
use helper for init client + check if we should init sync clients
2024-07-06 12:52:41 -07:00