49e8cdbff9 | 2024-03-26 08:08:15 -07:00 | Krrish Dholakia
    fix(router.py): check for context window error when handling 400 status code errors
    was causing proxy context window fallbacks to not work as expected

f98aead602 | 2024-03-25 08:26:28 -07:00 | Krrish Dholakia
    feat(main.py): support router.chat.completions.create
    allows using router with instructor
    https://github.com/BerriAI/litellm/issues/2673
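The f98aead602 commit above gives the router the same `.chat.completions.create(...)` call shape as an OpenAI client, which is what lets client libraries such as instructor drive it. A minimal sketch of why that shape matters, using a hypothetical stand-in class rather than litellm's actual code:

```python
# Toy router (hypothetical, not litellm internals) exposing the OpenAI
# client call shape: client.chat.completions.create(model=..., messages=...).
# Any library written against that shape can call the router directly.
from types import SimpleNamespace


class MiniRouter:
    """Stand-in router with an OpenAI-compatible surface."""

    def __init__(self, deployments):
        self.deployments = deployments
        # mirror openai.OpenAI: router.chat.completions.create(...)
        self.chat = SimpleNamespace(
            completions=SimpleNamespace(create=self._create)
        )

    def _create(self, model, messages, **kwargs):
        # a real router would pick a healthy deployment here
        deployment = self.deployments[model][0]
        return {"deployment": deployment, "echo": messages[-1]["content"]}


router = MiniRouter({"gpt-3.5-turbo": ["azure/gpt-35-eu", "openai/gpt-3.5"]})
resp = router.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
print(resp["deployment"])  # azure/gpt-35-eu
```

The deployment names and return value are illustrative only; the point is the call path, not the payload.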
e8e7964025 | 2024-03-23 19:10:34 -07:00 | Krrish Dholakia
    docs(routing.md): add pre-call checks to docs

b7321ae4ee | 2024-03-23 18:56:08 -07:00 | Krrish Dholakia
    fix(router.py): fix pre-call check logic

eb3ca85d7e | 2024-03-23 18:03:30 -07:00 | Krrish Dholakia
    feat(router.py): enable pre-call checks
    filter models outside of context window limits of a given message for a model group
    https://github.com/BerriAI/litellm/issues/872
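The eb3ca85d7e commit describes the pre-call check: before routing, drop deployments in the model group whose context window cannot fit the incoming messages. A hedged sketch of that filtering step (function and field names are assumptions, not litellm's internals):

```python
# Illustrative pre-call context-window check: estimate the prompt size,
# then keep only deployments whose max_input_tokens can accommodate it.

def estimate_tokens(messages):
    # crude heuristic: roughly 4 characters per token
    return sum(len(m["content"]) for m in messages) // 4


def pre_call_check(deployments, messages):
    needed = estimate_tokens(messages)
    return [d for d in deployments if d["max_input_tokens"] >= needed]


deployments = [
    {"name": "gpt-3.5-turbo", "max_input_tokens": 4096},
    {"name": "gpt-3.5-turbo-16k", "max_input_tokens": 16384},
]
long_prompt = [{"role": "user", "content": "x" * 40000}]  # ~10k tokens
usable = pre_call_check(deployments, long_prompt)
print([d["name"] for d in usable])  # ['gpt-3.5-turbo-16k']
```

A production router would use a real tokenizer instead of the character heuristic; the filtering logic is the same either way.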
1ba21a8c58 | 2024-03-14 14:25:30 -07:00 | Krrish Dholakia
    fix(router.py): add no-proxy support for router

aaa008ecde | 2024-03-13 08:00:56 -07:00 | ishaan-jaff
    (fix) raising "No healthy deployment"

cd8f25f6f8 | 2024-03-11 19:00:56 -07:00 | Ishaan Jaff
    Merge branch 'main' into litellm_imp_mem_use

881063c424 | 2024-03-11 18:59:57 -07:00 | Ishaan Jaff
    Merge pull request #2461 from BerriAI/litellm_improve_mem_use
    LiteLLM - improve memory utilization - don't use inMemCache on Router

eae1710c4b | 2024-03-11 16:52:06 -07:00 | ishaan-jaff
    (fix) mem usage router.py

1bd3bb1128 | 2024-03-11 16:22:04 -07:00 | ishaan-jaff
    (fix) improve mem util

9735250db7 | 2024-03-11 14:51:22 -07:00 | Krrish Dholakia
    fix(router.py): support fallbacks / retries with sync embedding calls
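Commit 9735250db7 brings fallback/retry behavior to synchronous embedding calls. The core pattern is simple to sketch, assuming hypothetical names since the commit does not show the implementation: try the primary deployment, and on failure walk the fallback list before giving up.

```python
# Illustrative sync fallback loop (not litellm's actual implementation):
# try each deployment in order, collecting errors, and raise only if all fail.

def embedding_with_fallbacks(call, deployments):
    errors = []
    for name in deployments:
        try:
            return call(name)
        except Exception as exc:  # a real router matches specific error types
            errors.append((name, str(exc)))
    raise RuntimeError(f"all deployments failed: {errors}")


def flaky_call(name):
    # simulated backend: the primary times out, the fallback succeeds
    if name == "primary":
        raise TimeoutError("primary timed out")
    return {"deployment": name, "embedding": [0.1, 0.2]}


result = embedding_with_fallbacks(flaky_call, ["primary", "fallback-1"])
print(result["deployment"])  # fallback-1
```

In a real router the except clause would distinguish retryable errors (timeouts, rate limits) from permanent ones (auth failures), but the control flow is the same.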
2f1899284c | 2024-03-11 12:34:35 -07:00 | Krrish Dholakia
    fix(router.py): add more debug logs

a1784284bb | 2024-03-09 16:32:08 -08:00 | Ishaan Jaff
    Merge pull request #2416 from BerriAI/litellm_use_consistent_port
    (docs) LiteLLM Proxy - use port 4000 in examples

ea6f42216c | 2024-03-08 21:59:00 -08:00 | ishaan-jaff
    (docs) use port 4000

fe125a5131 | 2024-03-08 14:19:37 -08:00 | Krrish Dholakia
    test(test_whisper.py): add testing for load balancing whisper endpoints on router

ae54b398d2 | 2024-03-08 13:58:15 -08:00 | Krrish Dholakia
    feat(router.py): add load balancing for async transcription calls

86ac020b12 | 2024-03-07 18:50:45 -08:00 | ishaan-jaff
    (fix) show latency per deployment on router debug logs

6f0faca85b | 2024-03-07 18:33:09 -08:00 | ishaan-jaff
    (feat) print debug info per deployment
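Commits 86ac020b12 and 6f0faca85b add per-deployment debug output, including latency. The bookkeeping behind that kind of log line can be sketched as follows (names and output format are assumptions, not litellm's actual debug output):

```python
# Hypothetical per-deployment latency tracking: time each call and keep
# samples keyed by deployment name so debug logs can report an average.
import time
from collections import defaultdict

latencies = defaultdict(list)


def timed_call(deployment, fn):
    start = time.perf_counter()
    try:
        return fn()
    finally:
        # record elapsed seconds even if the call raised
        latencies[deployment].append(time.perf_counter() - start)


timed_call("azure/gpt-35-eu", lambda: sum(range(1000)))
timed_call("azure/gpt-35-eu", lambda: sum(range(1000)))

for name, samples in latencies.items():
    avg_ms = 1000 * sum(samples) / len(samples)
    print(f"{name}: {len(samples)} calls, avg {avg_ms:.3f} ms")
```

Recording in a `finally` block matters: failed calls still contribute latency samples, which is exactly what you want when debugging a slow or flaky deployment.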
7cb86c4e0f | 2024-02-24 10:46:59 -08:00 | Krrish Dholakia
    fix(router.py): fix text completion error logging

2a0d2dbdf9 | 2024-02-21 18:13:03 -08:00 | Krrish Dholakia
    fix(router.py): mask the api key in debug statements on router

5a0f962beb | 2024-02-21 08:45:42 -08:00 | Krrish Dholakia
    fix(router.py): fix debug log

693efc8e84 | 2024-02-14 11:00:09 -08:00 | ishaan-jaff
    (feat) add moderation on router

3d97004b15 | 2024-02-09 17:42:17 -08:00 | ishaan-jaff
    (feat) support timeout on bedrock

920d684da4 | 2024-02-07 15:44:28 -08:00 | ishaan-jaff
    (feat) log model_info in router metadata

2d5e639a09 | 2024-02-01 09:18:50 -08:00 | Krish Dholakia
    Merge branch 'main' into litellm_http_proxy_support

a07f3ec2d4 | 2024-01-30 21:12:41 -08:00 | Krrish Dholakia
    fix(router.py): remove wrapping of router.completion(); let clients handle this

e011c4a989 | 2024-01-30 11:45:22 -08:00 | ishaan-jaff
    (fix) use OpenAI organization in ahealth_check

7fe8fff5d8 | 2024-01-30 10:54:05 -08:00 | ishaan-jaff
    (router) set OpenAI organization

5e72d1901b | 2024-01-23 08:05:59 -08:00 | Ishaan Jaff
    Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times
    [Feat] Litellm.Router set custom cooldown times

24358a2a3e | 2024-01-23 08:04:29 -08:00 | ishaan-jaff
    (fix) router - update model_group on fallback

22e26fcc4b | 2024-01-23 08:03:29 -08:00 | ishaan-jaff
    (fix) revert router.py to stable version

b4cc227d1c | 2024-01-23 07:57:09 -08:00 | ishaan-jaff
    Revert "(feat) add typehints for litellm.acompletion"
    This reverts commit a9cf6cec80.

0124de558d | 2024-01-23 07:54:02 -08:00 | ishaan-jaff
    Revert "v0"
    This reverts commit b730482aaf.

1e3f14837b | 2024-01-23 07:19:37 -08:00 | Krrish Dholakia
    fix(router.py): fix dereferencing param order

53b879bc6c | 2024-01-22 22:33:06 -08:00 | Krrish Dholakia
    fix(router.py): ensure no unsupported args are passed to completion()

f19f0dad89 | 2024-01-22 22:15:39 -08:00 | Krrish Dholakia
    fix(router.py): fix client init

5e0d99b2ef | 2024-01-22 21:42:25 -08:00 | Krrish Dholakia
    fix(router.py): fix order of dereferenced dictionaries

14585c9966 | 2024-01-22 14:41:55 -08:00 | ishaan-jaff
    (fix) router - update model_group on fallback

435d4b9279 | 2024-01-19 20:49:17 -08:00 | Ishaan Jaff
    Merge pull request #1534 from BerriAI/litellm_custom_cooldown_times
    [Feat] Litellm.Router set custom cooldown times

84684c50fa | 2024-01-19 20:30:41 -08:00 | ishaan-jaff
    (fix) router - timeout exception mapping

16b688d1ff | 2024-01-19 19:43:41 -08:00 | ishaan-jaff
    (feat) router - set custom cooldown times
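Commit 16b688d1ff (merged via PR #1534) makes the router's cooldown window configurable. The general mechanism can be sketched like this, with parameter names as assumptions rather than the exact litellm API: after a deployment fails more than an allowed number of times, exclude it from routing for `cooldown_time` seconds.

```python
# Hypothetical cooldown tracker: count failures per deployment and, past a
# threshold, mark the deployment unavailable until the cooldown expires.
import time


class CooldownTracker:
    def __init__(self, allowed_fails=2, cooldown_time=60.0):
        self.allowed_fails = allowed_fails
        self.cooldown_time = cooldown_time
        self.fail_counts = {}
        self.cooldown_until = {}

    def record_failure(self, deployment, now=None):
        now = time.monotonic() if now is None else now
        count = self.fail_counts.get(deployment, 0) + 1
        self.fail_counts[deployment] = count
        if count > self.allowed_fails:
            self.cooldown_until[deployment] = now + self.cooldown_time

    def is_available(self, deployment, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.cooldown_until.get(deployment, 0.0)


tracker = CooldownTracker(allowed_fails=2, cooldown_time=30.0)
for _ in range(3):
    tracker.record_failure("azure/gpt-35-eu", now=100.0)

print(tracker.is_available("azure/gpt-35-eu", now=110.0))  # False: cooling down
print(tracker.is_available("azure/gpt-35-eu", now=131.0))  # True: cooldown expired
```

The injectable `now` parameter keeps the logic testable without sleeping; production code would just use the monotonic clock.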
91e57bd039 | 2024-01-19 16:32:43 -08:00 | ishaan-jaff
    (fix) add router typehints

a9cf6cec80 | 2024-01-19 16:05:26 -08:00 | ishaan-jaff
    (feat) add typehints for litellm.acompletion

b730482aaf | 2024-01-19 15:49:37 -08:00 | ishaan-jaff
    v0

8c0b7b1015 | 2024-01-19 13:57:33 -08:00 | ishaan-jaff
    (feat) improve router logging/debugging messages

7b2c15aa51 | 2024-01-19 12:28:51 -08:00 | ishaan-jaff
    (feat) improve litellm.Router logging

8873fe9049 | 2024-01-18 09:58:41 -08:00 | Krrish Dholakia
    fix(router.py): support http and https proxies

79c412cab5 | 2024-01-17 21:23:40 -08:00 | ishaan-jaff
    (feat) set Azure vision enhancement params using os.environ

0c4b86c211 | 2024-01-17 10:24:30 -08:00 | ishaan-jaff
    (feat) litellm router - Azure, use base_url when set
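Commit 0c4b86c211 changes Azure endpoint resolution to prefer an explicitly configured `base_url`. The resolution rule amounts to a one-branch fallback, sketched here with illustrative field names (the config keys are assumptions, not litellm's exact schema):

```python
# Minimal sketch of "use base_url when set": prefer the explicit base_url,
# otherwise build the default Azure endpoint from the resource name.

def resolve_azure_endpoint(config):
    if config.get("base_url"):
        return config["base_url"]
    return f"https://{config['azure_resource']}.openai.azure.com"


print(resolve_azure_endpoint({"azure_resource": "my-res"}))
# https://my-res.openai.azure.com

print(resolve_azure_endpoint({
    "azure_resource": "my-res",
    "base_url": "https://proxy.internal/azure",
}))
# https://proxy.internal/azure
```

This matters for deployments behind gateways or private proxies, where the default resource-derived URL would be unreachable.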