Krish Dholakia | d38f01e956 | Merge branch 'main' into litellm_fix_httpx_transport | 2024-07-02 17:17:43 -07:00
Ishaan Jaff | d25b079caf | fix img gen test | 2024-06-29 20:54:22 -07:00
Ishaan Jaff | 0bda80ddea | test- router when using openai prefix | 2024-06-29 17:28:08 -07:00
Krrish Dholakia | c9a424d28d | fix(router.py): fix get_router_model_info for azure models | 2024-06-28 22:13:29 -07:00
Krrish Dholakia | aa6f7665c4 | fix(router.py): only return 'max_tokens', 'input_cost_per_token', etc. in 'get_router_model_info' if base_model is set | 2024-06-28 10:45:31 -07:00
Krrish Dholakia | 98daedaf60 | fix(router.py): fix setting httpx mounts | 2024-06-26 17:22:04 -07:00
Krrish Dholakia | 341c7857c1 | test(test_router.py): add testing | 2024-06-24 17:28:12 -07:00
Krrish Dholakia | f5fbdf0fee | fix(router.py): use user-defined model_input_tokens for pre-call filter checks | 2024-06-24 17:25:26 -07:00
Krrish Dholakia | a31a05d45d | feat(dynamic_rate_limiter.py): working e2e | 2024-06-22 14:41:22 -07:00
Krrish Dholakia | 068e8dff5b | feat(dynamic_rate_limiter.py): passing base case | 2024-06-21 22:46:46 -07:00
Krrish Dholakia | 06b297a6e8 | fix(router.py): fix set_client init to check if custom_llm_provider is azure not if in model name | 2024-06-21 17:09:20 -07:00
    fixes issue where 'azure_ai/' was being init as azureopenai client
Krrish Dholakia | 16889b8478 | feat(router.py): allow user to call specific deployment via id | 2024-06-19 13:02:46 -07:00
    Allows easier health checks for specific deployments by just passing in model id
Krrish Dholakia | 14b66c3daa | fix(router.py): support multiple orgs in 1 model definition | 2024-06-18 19:36:58 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3949
Krrish Dholakia | 6306914e56 | fix(types/router.py): modelgroupinfo to handle mode being None and supported_openai_params not being a list | 2024-06-08 20:13:45 -07:00
Krrish Dholakia | a7dcf25722 | feat(router.py): enable settting 'order' for a deployment in model list | 2024-06-06 09:46:51 -07:00
    Allows user to control which model gets called first in model group
Krrish Dholakia | 1d18ca6a7d | fix(router.py): security fix - don't show api key in invalid model setup error message | 2024-05-29 16:14:57 -07:00
Krrish Dholakia | cc41db018f | test(test_router.py): fix testing | 2024-05-21 17:31:31 -07:00
Krrish Dholakia | 988970f4c2 | feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769 | 2024-05-21 17:24:51 -07:00
Krrish Dholakia | 1312eece6d | fix(router.py): overloads for better router.acompletion typing | 2024-05-13 14:27:16 -07:00
Krrish Dholakia | ebc927f1c8 | feat(router.py): allow setting model_region in litellm_params | 2024-05-11 10:18:08 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3580
Krrish Dholakia | 1baad80c7d | fix(router.py): cooldown deployments, for 401 errors | 2024-04-30 17:54:00 -07:00
Krrish Dholakia | 87ff26ff27 | fix(router.py): unify retry timeout logic across sync + async function_with_retries | 2024-04-30 15:23:19 -07:00
Krrish Dholakia | 280148543f | fix(router.py): fix trailing slash handling for api base which contains /v1 | 2024-04-27 17:36:28 -07:00
Krish Dholakia | 1a06f009d1 | Merge branch 'main' into litellm_default_router_retries | 2024-04-27 11:21:57 -07:00
Krrish Dholakia | e05764bdb7 | fix(router.py): add /v1/ if missing to base url, for openai-compatible api's | 2024-04-26 17:05:07 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/2279
Krrish Dholakia | 180718c33f | fix(router.py): support verify_ssl flag | 2024-04-26 15:38:01 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/3162#issuecomment-2075273807
Krrish Dholakia | 7730520fb0 | fix(router.py): allow passing httpx.timeout to timeout param in router | 2024-04-26 14:57:19 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3162
Krrish Dholakia | 160acc085a | fix(router.py): fix default retry logic | 2024-04-25 11:57:27 -07:00
Ishaan Jaff | 4e707af592 | Revert "fix(router.py): fix max retries on set_client" | 2024-04-24 23:19:14 -07:00
    This reverts commit 821844c1a3.
Krrish Dholakia | 821844c1a3 | fix(router.py): fix max retries on set_client | 2024-04-24 22:03:01 -07:00
Krrish Dholakia | 84d43484c6 | fix(router.py): make sure pre call rpm check runs even when model not in model cost map | 2024-04-11 09:27:46 -07:00
Krrish Dholakia | a47a719caa | fix(router.py): generate consistent model id's | 2024-04-10 15:23:57 -07:00
    having the same id for a deployment, lets redis usage caching work across multiple instances
Ishaan Jaff | a55f3cdace | test - router re-use openai client | 2024-04-06 11:33:17 -07:00
Krrish Dholakia | 2e40ab959d | test(test_router.py): fix casting | 2024-04-04 13:54:16 -07:00
Krrish Dholakia | c372c873a0 | test(test_router.py): fix test to check cast | 2024-04-04 13:32:50 -07:00
Krrish Dholakia | b9030be792 | test(test_router.py): fix test to check type | 2024-04-04 11:45:12 -07:00
Krrish Dholakia | f536fb13e6 | fix(proxy_server.py): persist models added via /model/new to db | 2024-04-03 20:16:41 -07:00
    allows models to be used across instances
    https://github.com/BerriAI/litellm/issues/2319 , https://github.com/BerriAI/litellm/issues/2329
Krrish Dholakia | 52b1538b2e | fix(router.py): support context window fallbacks for pre-call checks | 2024-04-01 10:51:54 -07:00
Ishaan Jaff | 6d408dcce7 | (fix) test aimg gen on router | 2024-03-28 12:27:26 -07:00
Krrish Dholakia | 49e8cdbff9 | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00
    was causing proxy context window fallbacks to not work as expected
Krrish Dholakia | e8e7964025 | docs(routing.md): add pre-call checks to docs | 2024-03-23 19:10:34 -07:00
Krrish Dholakia | b7321ae4ee | fix(router.py): fix pre call check logic | 2024-03-23 18:56:08 -07:00
Krrish Dholakia | eb3ca85d7e | feat(router.py): enable pre-call checks | 2024-03-23 18:03:30 -07:00
    filter models outside of context window limits of a given message for a model group
    https://github.com/BerriAI/litellm/issues/872
Krrish Dholakia | 478307d4cf | fix(bedrock.py): support anthropic messages api on bedrock (claude-3) | 2024-03-04 17:15:47 -08:00
ishaan-jaff | 9bac163e4e | (test) claude-instant-1 | 2024-03-04 08:32:13 -08:00
Krrish Dholakia | 4c951d20bc | test: removing aws tests - account suspended - pending their approval | 2024-02-28 13:46:20 -08:00
ishaan-jaff | 693efc8e84 | (feat) add moderation on router | 2024-02-14 11:00:09 -08:00
ishaan-jaff | b0902f0a8c | (ci/cd) add more logging to timeout test | 2024-01-23 18:39:19 -08:00
ishaan-jaff | b40176810e | (test) dynamic timeouts - router | 2024-01-23 13:27:49 -08:00
Krrish Dholakia | 05754ef238 | test(test_router.py): add more testing for dynamically passing params to router | 2024-01-23 10:31:49 -08:00