f9034ffcee | Krrish Dholakia | 2024-08-28 10:52:12 -07:00
    test: rename test to run earlier

84137afdd8 | Krrish Dholakia | 2024-08-28 10:50:53 -07:00
    test: fix test

33972cc79c | Krrish Dholakia | 2024-08-24 16:59:30 -07:00
    fix(router.py): enable dynamic retry after in exception string
    Updates cooldown logic to cooldown individual models
    Closes https://github.com/BerriAI/litellm/issues/1339
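The retry-after change in commit 33972cc79c reads a provider-suggested wait out of the exception text. A minimal sketch of that idea (the helper name is hypothetical; litellm's actual parsing lives in its router/exception code):

```python
import re
from typing import Optional

def extract_retry_after(exception_str: str) -> Optional[float]:
    """Pull a provider-suggested retry-after (in seconds) out of an
    exception message such as '... Please retry after 26 seconds ...'."""
    match = re.search(r"retry after (\d+) second", exception_str, re.IGNORECASE)
    if match:
        return float(match.group(1))
    return None
```

If no retry-after is present, the caller would fall back to its default backoff instead.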
76834c6c59 | Krrish Dholakia | 2024-08-24 15:21:04 -07:00
    test(test_router.py): add test to ensure retry-after matches received value

7beb0910c6 | Krrish Dholakia | 2024-08-24 15:19:27 -07:00
    test(test_router.py): skip test - create separate pr to match retry after

de2373d52b | Krrish Dholakia | 2024-08-24 12:55:15 -07:00
    fix(openai.py): coverage for correctly re-raising exception headers on openai chat completion + embedding endpoints

068aafdff9 | Krrish Dholakia | 2024-08-24 12:30:30 -07:00
    fix(utils.py): correctly re-raise the headers from an exception, if present
    Fixes issue where retry after on router was not using azure / openai numbers
5a2c9d5121 | Krrish Dholakia | 2024-08-24 10:08:14 -07:00
    test(test_router.py): add test to ensure error is correctly re-raised

0b06a76cf9 | Krrish Dholakia | 2024-08-24 09:53:05 -07:00
    fix(router.py): don't cooldown on apiconnectionerrors
    Fixes issue where model would be in cooldown due to api connection errors
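The reasoning behind commit 0b06a76cf9: a connection failure says nothing about the deployment's quota or health, so it should not trigger a cooldown the way a 429 or 401 would. Sketched with stand-in exception classes (not litellm's actual cooldown logic):

```python
class APIConnectionError(Exception):
    """Transient network failure reaching the provider."""

class RateLimitError(Exception):
    """Provider-side 429: the deployment itself is saturated."""

def should_cooldown(exc: Exception) -> bool:
    # Network blips are not evidence the deployment is unhealthy,
    # so don't take it out of rotation for them.
    if isinstance(exc, APIConnectionError):
        return False
    return True
```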
d42949cb4a | Ishaan Jaff | 2024-08-07 14:12:40 -07:00
    test_router_provider_wildcard_routing

3249e295cb | Ishaan Jaff | 2024-08-07 13:52:00 -07:00
    test provider wildcard routing

cd94c3adc1 | Krrish Dholakia | 2024-08-05 09:58:44 -07:00
    fix(types/router.py): remove model_info pydantic field
    Fixes https://github.com/BerriAI/litellm/issues/5042

826bb125e8 | Krrish Dholakia | 2024-07-25 19:54:40 -07:00
    test(test_router.py): handle azure api instability

8661da1980 | Krish Dholakia | 2024-07-06 19:12:06 -07:00
    Merge branch 'main' into litellm_fix_httpx_transport

67433a04a2 | Krrish Dholakia | 2024-07-02 22:13:41 -07:00
    test: fix test

4b17f2dfdb | Krrish Dholakia | 2024-07-02 17:46:50 -07:00
    test: skip bad test
0894439118 | Krrish Dholakia | 2024-07-02 17:45:33 -07:00
    test(test_router.py): fix test

d38f01e956 | Krish Dholakia | 2024-07-02 17:17:43 -07:00
    Merge branch 'main' into litellm_fix_httpx_transport

d25b079caf | Ishaan Jaff | 2024-06-29 20:54:22 -07:00
    fix img gen test

0bda80ddea | Ishaan Jaff | 2024-06-29 17:28:08 -07:00
    test - router when using openai prefix

c9a424d28d | Krrish Dholakia | 2024-06-28 22:13:29 -07:00
    fix(router.py): fix get_router_model_info for azure models

aa6f7665c4 | Krrish Dholakia | 2024-06-28 10:45:31 -07:00
    fix(router.py): only return 'max_tokens', 'input_cost_per_token', etc. in 'get_router_model_info' if base_model is set

98daedaf60 | Krrish Dholakia | 2024-06-26 17:22:04 -07:00
    fix(router.py): fix setting httpx mounts
341c7857c1 | Krrish Dholakia | 2024-06-24 17:28:12 -07:00
    test(test_router.py): add testing

f5fbdf0fee | Krrish Dholakia | 2024-06-24 17:25:26 -07:00
    fix(router.py): use user-defined model_input_tokens for pre-call filter checks

a31a05d45d | Krrish Dholakia | 2024-06-22 14:41:22 -07:00
    feat(dynamic_rate_limiter.py): working e2e

068e8dff5b | Krrish Dholakia | 2024-06-21 22:46:46 -07:00
    feat(dynamic_rate_limiter.py): passing base case

06b297a6e8 | Krrish Dholakia | 2024-06-21 17:09:20 -07:00
    fix(router.py): fix set_client init to check if custom_llm_provider is azure, not if 'azure' is in the model name
    Fixes issue where 'azure_ai/' was being init as an AzureOpenAI client

16889b8478 | Krrish Dholakia | 2024-06-19 13:02:46 -07:00
    feat(router.py): allow user to call specific deployment via id
    Allows easier health checks for specific deployments by just passing in the model id
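Commit 16889b8478 lets a caller target one deployment directly by its model id instead of the model-group name. A rough sketch of that selection step, assuming a simplified `model_list` shape (the helper and the exact dict layout are illustrative, not litellm's real internals):

```python
def pick_deployment(model_list: list[dict], model_or_id: str) -> dict:
    """Prefer an exact deployment-id match; otherwise fall back to
    model-group name matching (first deployment in the group)."""
    for dep in model_list:
        if dep.get("model_info", {}).get("id") == model_or_id:
            return dep
    for dep in model_list:
        if dep.get("model_name") == model_or_id:
            return dep
    raise ValueError(f"no deployment found for {model_or_id!r}")
```

Passing the id makes per-deployment health checks straightforward: each probe hits exactly the deployment it names.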
14b66c3daa | Krrish Dholakia | 2024-06-18 19:36:58 -07:00
    fix(router.py): support multiple orgs in 1 model definition
    Closes https://github.com/BerriAI/litellm/issues/3949

6306914e56 | Krrish Dholakia | 2024-06-08 20:13:45 -07:00
    fix(types/router.py): ModelGroupInfo - handle mode being None and supported_openai_params not being a list

a7dcf25722 | Krrish Dholakia | 2024-06-06 09:46:51 -07:00
    feat(router.py): enable setting 'order' for a deployment in model list
    Allows user to control which model gets called first in a model group

1d18ca6a7d | Krrish Dholakia | 2024-05-29 16:14:57 -07:00
    fix(router.py): security fix - don't show api key in invalid model setup error message

cc41db018f | Krrish Dholakia | 2024-05-21 17:31:31 -07:00
    test(test_router.py): fix testing

988970f4c2 | Krrish Dholakia | 2024-05-21 17:24:51 -07:00
    feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769

1312eece6d | Krrish Dholakia | 2024-05-13 14:27:16 -07:00
    fix(router.py): overloads for better router.acompletion typing
ebc927f1c8 | Krrish Dholakia | 2024-05-11 10:18:08 -07:00
    feat(router.py): allow setting model_region in litellm_params
    Closes https://github.com/BerriAI/litellm/issues/3580

1baad80c7d | Krrish Dholakia | 2024-04-30 17:54:00 -07:00
    fix(router.py): cooldown deployments for 401 errors

87ff26ff27 | Krrish Dholakia | 2024-04-30 15:23:19 -07:00
    fix(router.py): unify retry timeout logic across sync + async function_with_retries

280148543f | Krrish Dholakia | 2024-04-27 17:36:28 -07:00
    fix(router.py): fix trailing slash handling for api base which contains /v1

1a06f009d1 | Krish Dholakia | 2024-04-27 11:21:57 -07:00
    Merge branch 'main' into litellm_default_router_retries

e05764bdb7 | Krrish Dholakia | 2024-04-26 17:05:07 -07:00
    fix(router.py): add /v1/ if missing to base url, for openai-compatible APIs
    Fixes https://github.com/BerriAI/litellm/issues/2279
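Commits 280148543f and e05764bdb7 both deal with normalizing an OpenAI-compatible `api_base`: strip a trailing slash and append `/v1` when it's absent. The whole normalization fits in a few lines (hypothetical helper name; litellm's actual code does this inside the router setup):

```python
def ensure_v1(api_base: str) -> str:
    """Normalize an OpenAI-compatible base URL: drop a trailing slash
    and append '/v1' if it's not already the final path segment."""
    base = api_base.rstrip("/")
    if not base.endswith("/v1"):
        base = base + "/v1"
    return base
```

This makes `http://localhost:8000`, `http://localhost:8000/`, and `http://localhost:8000/v1/` all resolve to the same client base URL.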
180718c33f | Krrish Dholakia | 2024-04-26 15:38:01 -07:00
    fix(router.py): support verify_ssl flag
    Fixes https://github.com/BerriAI/litellm/issues/3162#issuecomment-2075273807

7730520fb0 | Krrish Dholakia | 2024-04-26 14:57:19 -07:00
    fix(router.py): allow passing httpx.Timeout to the timeout param in router
    Closes https://github.com/BerriAI/litellm/issues/3162
160acc085a | Krrish Dholakia | 2024-04-25 11:57:27 -07:00
    fix(router.py): fix default retry logic

4e707af592 | Ishaan Jaff | 2024-04-24 23:19:14 -07:00
    Revert "fix(router.py): fix max retries on set_client"
    This reverts commit 821844c1a3.

821844c1a3 | Krrish Dholakia | 2024-04-24 22:03:01 -07:00
    fix(router.py): fix max retries on set_client

84d43484c6 | Krrish Dholakia | 2024-04-11 09:27:46 -07:00
    fix(router.py): make sure pre call rpm check runs even when model not in model cost map

a47a719caa | Krrish Dholakia | 2024-04-10 15:23:57 -07:00
    fix(router.py): generate consistent model ids
    Having the same id for a deployment lets redis usage caching work across multiple instances
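The idea behind commit a47a719caa is that a deployment id derived deterministically from the deployment's own parameters comes out identical on every router instance, so they all read and write the same redis usage keys. A minimal sketch of such a derivation (the function and the param dict are illustrative assumptions, not litellm's exact scheme):

```python
import hashlib
import json

def deterministic_model_id(litellm_params: dict) -> str:
    """Derive a stable id from the deployment's params: serialize with
    sorted keys so dict ordering can't change the hash, then truncate."""
    blob = json.dumps(litellm_params, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:16]
```

Because the hash input is canonicalized, two instances configured with the same deployment always agree on the id, which is what lets cross-instance usage caching work.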
a55f3cdace | Ishaan Jaff | 2024-04-06 11:33:17 -07:00
    test - router re-use openai client