Commit graph

210 commits

Author SHA1 Message Date
ishaan-jaff
d9f74ef4a1 (feat) control ssl_verify on litellm.router 2024-02-01 20:36:50 -08:00
Krish Dholakia
2d5e639a09 Merge branch 'main' into litellm_http_proxy_support 2024-02-01 09:18:50 -08:00
Krrish Dholakia
a07f3ec2d4 fix(router.py): remove wrapping of router.completion() let clients handle this 2024-01-30 21:12:41 -08:00
ishaan-jaff
e011c4a989 (fix) use OpenAI organization in ahealth_check 2024-01-30 11:45:22 -08:00
ishaan-jaff
7fe8fff5d8 (router) set organization OpenAI 2024-01-30 10:54:05 -08:00
Ishaan Jaff
5e72d1901b Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times ([Feat] Litellm.Router set custom cooldown times) 2024-01-23 08:05:59 -08:00
ishaan-jaff
24358a2a3e (fix) router - update model_group on fallback 2024-01-23 08:04:29 -08:00
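The "update model_group on fallback" fix above concerns recording which deployment group actually served a request after the router fell back. A minimal plain-Python sketch of that idea (all names hypothetical; this is not litellm's implementation):

```python
def complete_with_fallbacks(call, model_group, fallbacks):
    """Try `model_group` first, then each configured fallback group.

    `call` is any function that takes a model-group name and either
    returns a response dict or raises. The returned response has its
    "model_group" field set to the group that actually answered, so
    downstream logging reflects the fallback that was used.
    """
    groups = [model_group] + fallbacks.get(model_group, [])
    last_err = None
    for group in groups:
        try:
            response = call(group)
            response["model_group"] = group  # record the serving group
            return response
        except Exception as err:
            last_err = err
    raise last_err
```

The key detail the fix points at: the group name is overwritten only after a call succeeds, so a response served by a fallback is never attributed to the original group.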
ishaan-jaff
22e26fcc4b (fix) revert router.py to stable version 2024-01-23 08:03:29 -08:00
ishaan-jaff
b4cc227d1c Revert "(feat) add typehints for litellm.acompletion" (reverts a9cf6cec80) 2024-01-23 07:57:09 -08:00
ishaan-jaff
0124de558d Revert "v0" (reverts b730482aaf) 2024-01-23 07:54:02 -08:00
Krrish Dholakia
1e3f14837b fix(router.py): fix dereferencing param order 2024-01-23 07:19:37 -08:00
Krrish Dholakia
53b879bc6c fix(router.py): ensure no unsupported args are passed to completion() 2024-01-22 22:33:06 -08:00
Krrish Dholakia
f19f0dad89 fix(router.py): fix client init 2024-01-22 22:15:39 -08:00
Krrish Dholakia
5e0d99b2ef fix(router.py): fix order of dereferenced dictionaries 2024-01-22 21:42:25 -08:00
ishaan-jaff
14585c9966 (fix) router - update model_group on fallback 2024-01-22 14:41:55 -08:00
Ishaan Jaff
435d4b9279 Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times ([Feat] Litellm.Router set custom cooldown times) 2024-01-19 20:49:17 -08:00
ishaan-jaff
84684c50fa (fix) router - timeout exception mapping 2024-01-19 20:30:41 -08:00
ishaan-jaff
16b688d1ff (feat) router - set custom cooldown times 2024-01-19 19:43:41 -08:00
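The "set custom cooldown times" feature above makes the failure-cooldown window configurable instead of fixed. A rough, hypothetical sketch of the mechanism (class and parameter names are illustrative, not litellm's actual code):

```python
import time


class CooldownTracker:
    """Track failing deployments and skip them for a configurable window.

    After `allowed_fails` consecutive failures, a deployment is put on
    cooldown for `cooldown_time` seconds and excluded from routing.
    """

    def __init__(self, cooldown_time=60.0, allowed_fails=3):
        self.cooldown_time = cooldown_time
        self.allowed_fails = allowed_fails
        self._fails = {}           # deployment -> consecutive failure count
        self._cooldown_until = {}  # deployment -> timestamp cooldown expires

    def report_failure(self, deployment, now=None):
        now = time.monotonic() if now is None else now
        self._fails[deployment] = self._fails.get(deployment, 0) + 1
        if self._fails[deployment] >= self.allowed_fails:
            self._cooldown_until[deployment] = now + self.cooldown_time
            self._fails[deployment] = 0  # reset once cooled down

    def available(self, deployments, now=None):
        now = time.monotonic() if now is None else now
        return [d for d in deployments
                if self._cooldown_until.get(d, 0) <= now]
```

Making `cooldown_time` a constructor argument is the whole feature: callers can tune how long a misbehaving deployment stays benched rather than inheriting one hard-coded value.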
ishaan-jaff
91e57bd039 (fix) add router typehints 2024-01-19 16:32:43 -08:00
ishaan-jaff
a9cf6cec80 (feat) add typehints for litellm.acompletion 2024-01-19 16:05:26 -08:00
ishaan-jaff
b730482aaf v0 2024-01-19 15:49:37 -08:00
ishaan-jaff
8c0b7b1015 (feat) - improve router logging/debugging messages 2024-01-19 13:57:33 -08:00
ishaan-jaff
7b2c15aa51 (feat) improve litellm.Router logging 2024-01-19 12:28:51 -08:00
Krrish Dholakia
8873fe9049 fix(router.py): support http and https proxies 2024-01-18 09:58:41 -08:00
ishaan-jaff
79c412cab5 (feat) set Azure vision enhancement params using os.environ 2024-01-17 21:23:40 -08:00
ishaan-jaff
0c4b86c211 (feat) litellm router - Azure, use base_url when set 2024-01-17 10:24:30 -08:00
Krrish Dholakia
40c7400894 fix(router.py): bump httpx pool limits 2024-01-11 12:51:29 +05:30
Krrish Dholakia
bb04a340a5 fix(lowest_latency.py): add back tpm/rpm checks, configurable time window 2024-01-10 20:52:01 +05:30
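The `lowest_latency.py` fix above combines two ideas: average latency is computed over a configurable time window, and deployments over their rate budget are skipped. A hypothetical sketch of that strategy (rpm check only, tpm omitted for brevity; not the actual `lowest_latency.py` code):

```python
import time
from collections import defaultdict


class LowestLatencySketch:
    """Pick the deployment with the lowest average latency inside a
    configurable time window, skipping any deployment that has exceeded
    its requests-per-minute (rpm) budget."""

    def __init__(self, window=300.0):
        self.window = window               # seconds of history to consider
        self._samples = defaultdict(list)  # deployment -> [(ts, latency)]

    def record(self, deployment, latency, now=None):
        now = time.monotonic() if now is None else now
        self._samples[deployment].append((now, latency))

    def pick(self, deployments, rpm_limits=None, now=None):
        now = time.monotonic() if now is None else now
        rpm_limits = rpm_limits or {}
        best, best_avg = None, float("inf")
        for dep in deployments:
            recent = [(ts, lat) for ts, lat in self._samples[dep]
                      if now - ts <= self.window]
            # rough rpm: requests recorded in the last 60 seconds
            rpm = sum(1 for ts, _ in recent if now - ts <= 60)
            if rpm >= rpm_limits.get(dep, float("inf")):
                continue  # over budget, skip this deployment
            # unseen deployments average 0.0 so they get tried at least once
            avg = (sum(lat for _, lat in recent) / len(recent)
                   if recent else 0.0)
            if avg < best_avg:
                best, best_avg = dep, avg
        return best
```

The configurable `window` matters because a deployment that was slow an hour ago should not be penalized forever; only samples inside the window count toward its average.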
Krrish Dholakia
1ca7747371 fix(router.py): azure client init fix 2024-01-08 14:56:57 +05:30
Krrish Dholakia
1a480b3bd2 refactor: trigger dockerbuild 2024-01-08 14:42:28 +05:30
Ishaan Jaff
a70626d6e9 Merge pull request #1356 from BerriAI/litellm_improve_proxy_logs ([Feat] Improve Proxy Logging) 2024-01-08 14:41:01 +05:30
Krrish Dholakia
ec83243521 fix(router.py): increasing connection pool limits for azure router 2024-01-08 14:39:49 +05:30
ishaan-jaff
b4d9754dc2 (feat) verbose logs + fallbacks - working well 2024-01-08 12:33:09 +05:30
ishaan-jaff
7e4f5e5fbd (feat) log what model is being used as a fallback 2024-01-08 09:41:24 +05:30
ishaan-jaff
f9d75233de (feat) move litellm router - to use logging.debug, logging.info 2024-01-08 09:31:29 +05:30
ishaan-jaff
ccd100fab3 (fix) improve logging when no fallbacks found 2024-01-08 08:53:40 +05:30
Krrish Dholakia
2d8d7e3569 perf(router.py): don't use asyncio.wait_for - just pass the timeout to the completion call 2024-01-06 17:05:55 +05:30
Krrish Dholakia
25241de69e fix(router.py): don't retry malformed / content policy violating errors (400 status code) (https://github.com/BerriAI/litellm/issues/1317, https://github.com/BerriAI/litellm/issues/1316) 2024-01-04 22:23:51 +05:30
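The "don't retry 400s" fix above reflects a general retry policy: only transient status codes are worth retrying, because a malformed request or content-policy violation will fail identically on every deployment. A hedged sketch of that policy (the error class and status set here are illustrative, not litellm's types):

```python
# Transient statuses worth retrying; 400s are deliberately absent.
RETRYABLE_STATUS = {408, 429, 500, 502, 503, 504}


class ApiError(Exception):
    """Hypothetical stand-in for an HTTP API error with a status code."""

    def __init__(self, status_code):
        super().__init__(f"status {status_code}")
        self.status_code = status_code


def call_with_retries(call, num_retries=2):
    """Retry transient errors only; raise non-retryable errors at once.

    A 400 (malformed request / content policy violation) is raised on
    the first attempt instead of burning retries across deployments.
    """
    for attempt in range(num_retries + 1):
        try:
            return call()
        except ApiError as err:
            if (err.status_code not in RETRYABLE_STATUS
                    or attempt == num_retries):
                raise
```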
ishaan-jaff
6d21ee3a2f (fix) proxy - cloudflare + Azure bug [non-streaming] 2024-01-04 10:24:51 +05:30
Krrish Dholakia
a37a18ca80 feat(router.py): add support for retry/fallbacks for async embedding calls 2024-01-02 11:54:28 +05:30
Krrish Dholakia
c12e3bd565 fix(router.py): fix model name passed through 2024-01-02 11:15:30 +05:30
Krrish Dholakia
dff4c172d0 refactor(test_router_caching.py): move tpm/rpm routing tests to separate file 2024-01-02 11:10:11 +05:30
Krrish Dholakia
a83e2e07cf fix(router.py): correctly raise no model available error (https://github.com/BerriAI/litellm/issues/1289) 2024-01-01 21:22:42 +05:30
Krrish Dholakia
027218c3f0 test(test_lowest_latency_routing.py): add more tests 2023-12-30 17:41:42 +05:30
Krrish Dholakia
f2d0d5584a fix(router.py): fix latency based routing 2023-12-30 17:25:40 +05:30
Krrish Dholakia
69935db239 fix(router.py): periodically re-initialize azure/openai clients to solve max conn issue 2023-12-30 15:48:34 +05:30
Krrish Dholakia
b66cf0aa43 fix(lowest_tpm_rpm_routing.py): broaden scope of get deployment logic 2023-12-30 13:27:50 +05:30
Krrish Dholakia
38f55249e1 fix(router.py): support retry and fallbacks for atext_completion 2023-12-30 11:19:32 +05:30
ishaan-jaff
459ba5b45e (feat) router, add ModelResponse type hints 2023-12-30 10:44:13 +05:30
Krrish Dholakia
a34de56289 fix(router.py): handle initial scenario for tpm/rpm routing 2023-12-30 07:28:45 +05:30