Author | Commit | Message | Date
Ishaan Jaff | 2fdd41ac1e | Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times: [Feat] Litellm.Router set custom cooldown times | 2024-01-23 13:47:37 -08:00
ishaan-jaff | ea94011952 | (fix) router - update model_group on fallback | 2024-01-23 13:47:37 -08:00
ishaan-jaff | 3692d7ffa4 | (fix) revert router.py to stable version | 2024-01-23 13:47:37 -08:00
ishaan-jaff | 69530ffb70 | Revert "(feat) add typehints for litellm.acompletion" (this reverts commit a9cf6cec80) | 2024-01-23 13:47:37 -08:00
ishaan-jaff | 9192712841 | Revert "v0" (this reverts commit b730482aaf) | 2024-01-23 13:47:37 -08:00
Krrish Dholakia | 0783bd1785 | fix(router.py): fix dereferencing param order | 2024-01-23 13:47:37 -08:00
Krrish Dholakia | 93866b8cd5 | fix(router.py): ensure no unsupported args are passed to completion() | 2024-01-23 13:47:37 -08:00
Krrish Dholakia | eb46ea8f8b | fix(router.py): fix client init | 2024-01-23 13:47:37 -08:00
Krrish Dholakia | b78b99c8a0 | fix(router.py): fix order of dereferenced dictionaries | 2024-01-23 13:47:37 -08:00
ishaan-jaff | 14585c9966 | (fix) router - update model_group on fallback | 2024-01-22 14:41:55 -08:00
Ishaan Jaff | 435d4b9279 | Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times: [Feat] Litellm.Router set custom cooldown times | 2024-01-19 20:49:17 -08:00
ishaan-jaff | 84684c50fa | (fix) router - timeout exception mapping | 2024-01-19 20:30:41 -08:00
ishaan-jaff | 16b688d1ff | (feat) router - set custom cooldown times | 2024-01-19 19:43:41 -08:00
ishaan-jaff | 91e57bd039 | (fix) add router typehints | 2024-01-19 16:32:43 -08:00
ishaan-jaff | a9cf6cec80 | (feat) add typehints for litellm.acompletion | 2024-01-19 16:05:26 -08:00
ishaan-jaff | b730482aaf | v0 | 2024-01-19 15:49:37 -08:00
ishaan-jaff | 8c0b7b1015 | (feat) - improve router logging/debugging messages | 2024-01-19 13:57:33 -08:00
ishaan-jaff | 7b2c15aa51 | (feat) improve litellm.Router logging | 2024-01-19 12:28:51 -08:00
ishaan-jaff | 79c412cab5 | (feat) set Azure vision enhancement params using os.environ | 2024-01-17 21:23:40 -08:00
ishaan-jaff | 0c4b86c211 | (feat) litellm router - Azure, use base_url when set | 2024-01-17 10:24:30 -08:00
Krrish Dholakia | 40c7400894 | fix(router.py): bump httpx pool limits | 2024-01-11 12:51:29 +05:30
Krrish Dholakia | bb04a340a5 | fix(lowest_latency.py): add back tpm/rpm checks, configurable time window | 2024-01-10 20:52:01 +05:30
Krrish Dholakia | 1ca7747371 | fix(router.py): azure client init fix | 2024-01-08 14:56:57 +05:30
Krrish Dholakia | 1a480b3bd2 | refactor: trigger dockerbuild | 2024-01-08 14:42:28 +05:30
Ishaan Jaff | a70626d6e9 | Merge pull request #1356 from BerriAI/litellm_improve_proxy_logs: [Feat] Improve Proxy Logging | 2024-01-08 14:41:01 +05:30
Krrish Dholakia | ec83243521 | fix(router.py): increasing connection pool limits for azure router | 2024-01-08 14:39:49 +05:30
ishaan-jaff | b4d9754dc2 | (feat) verbose logs + fallbacks - working well | 2024-01-08 12:33:09 +05:30
ishaan-jaff | 7e4f5e5fbd | (feat) log what model is being used as a fallback | 2024-01-08 09:41:24 +05:30
ishaan-jaff | f9d75233de | (feat) move litellm router - to use logging.debug, logging.info | 2024-01-08 09:31:29 +05:30
ishaan-jaff | ccd100fab3 | (fix) improve logging when no fallbacks found | 2024-01-08 08:53:40 +05:30
Krrish Dholakia | 2d8d7e3569 | perf(router.py): don't use asyncio.wait for - just pass it to the completion call for timeouts | 2024-01-06 17:05:55 +05:30
Krrish Dholakia | 25241de69e | fix(router.py): don't retry malformed / content policy violating errors (400 status code) (https://github.com/BerriAI/litellm/issues/1317, https://github.com/BerriAI/litellm/issues/1316) | 2024-01-04 22:23:51 +05:30
ishaan-jaff | 6d21ee3a2f | (fix) proxy - cloudflare + Azure bug [non-streaming] | 2024-01-04 10:24:51 +05:30
Krrish Dholakia | a37a18ca80 | feat(router.py): add support for retry/fallbacks for async embedding calls | 2024-01-02 11:54:28 +05:30
Krrish Dholakia | c12e3bd565 | fix(router.py): fix model name passed through | 2024-01-02 11:15:30 +05:30
Krrish Dholakia | dff4c172d0 | refactor(test_router_caching.py): move tpm/rpm routing tests to separate file | 2024-01-02 11:10:11 +05:30
Krrish Dholakia | a83e2e07cf | fix(router.py): correctly raise no model available error (https://github.com/BerriAI/litellm/issues/1289) | 2024-01-01 21:22:42 +05:30
Krrish Dholakia | 027218c3f0 | test(test_lowest_latency_routing.py): add more tests | 2023-12-30 17:41:42 +05:30
Krrish Dholakia | f2d0d5584a | fix(router.py): fix latency based routing | 2023-12-30 17:25:40 +05:30
Krrish Dholakia | 69935db239 | fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue | 2023-12-30 15:48:34 +05:30
Krrish Dholakia | b66cf0aa43 | fix(lowest_tpm_rpm_routing.py): broaden scope of get deployment logic | 2023-12-30 13:27:50 +05:30
Krrish Dholakia | 38f55249e1 | fix(router.py): support retry and fallbacks for atext_completion | 2023-12-30 11:19:32 +05:30
ishaan-jaff | 459ba5b45e | (feat) router, add ModelResponse type hints | 2023-12-30 10:44:13 +05:30
Krrish Dholakia | a34de56289 | fix(router.py): handle initial scenario for tpm/rpm routing | 2023-12-30 07:28:45 +05:30
Krrish Dholakia | 2fc264ca04 | fix(router.py): fix int logic | 2023-12-29 20:41:56 +05:30
Krrish Dholakia | cf91e49c87 | refactor(lowest_tpm_rpm.py): move tpm/rpm based routing to a separate file for better testing | 2023-12-29 18:33:43 +05:30
Krrish Dholakia | 678bbfa9be | fix(least_busy.py): support consistent use of model id instead of deployment name | 2023-12-29 17:05:26 +05:30
Krrish Dholakia | cbdfae1267 | fix(router.py): support wait_for for async completion calls | 2023-12-29 15:27:20 +05:30
Krrish Dholakia | 4882325c35 | feat(router.py): support 'retry_after' param, to set min timeout before retrying a failed request (default 0) | 2023-12-29 15:18:28 +05:30
Krrish Dholakia | 235526625d | feat(proxy_server.py): support maxage cache control | 2023-12-26 17:50:27 +05:30
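
Several of the commits above touch the Router's retry, cooldown, and fallback behavior (the #1534 "custom cooldown times" merges, the 'retry_after' param in 4882325c35, and the fallback logging fixes). As a rough illustration only, the sketch below shows how such a Router might be configured. The model group names and deployments are placeholders, and the cooldown_time and allowed_fails parameter names are assumptions inferred from the feature descriptions rather than confirmed by this log; retry_after is the name used in commit 4882325c35.

# Minimal sketch (not taken from this repo's docs): a litellm.Router wired up with
# the retry/cooldown/fallback settings referenced in the commits above.
# NOTE: cooldown_time and allowed_fails are assumed parameter names; verify them
# against the litellm Router signature for the version you run.
import os

from litellm import Router

model_list = [
    {
        "model_name": "azure-gpt",  # placeholder model group name
        "litellm_params": {
            "model": "azure/my-gpt-35-deployment",  # placeholder Azure deployment
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_base": os.getenv("AZURE_API_BASE"),
            "api_version": os.getenv("AZURE_API_VERSION"),
        },
    },
    {
        "model_name": "openai-gpt",  # placeholder fallback group
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
    },
]

router = Router(
    model_list=model_list,
    num_retries=2,
    retry_after=5,     # min seconds before retrying a failed request (commit 4882325c35)
    allowed_fails=1,   # assumed param: failures tolerated before a deployment is cooled down
    cooldown_time=30,  # assumed param: custom cooldown window in seconds (PR #1534)
    fallbacks=[{"azure-gpt": ["openai-gpt"]}],  # fall back to the OpenAI group if Azure keeps failing
)

response = router.completion(
    model="azure-gpt",
    messages=[{"role": "user", "content": "hello"}],
)
print(response)

Any production values should be checked against the Router signature at the commit range shown above, since several of these parameters were added or changed within this window.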