Commit graph

455 commits

Author SHA1 Message Date
Krish Dholakia
1060244a7f Merge pull request #2880 from BerriAI/litellm_api_base_alerting
feat(proxy/utils.py): return api base for request hanging alerts
2024-04-06 19:17:18 -07:00
Krrish Dholakia
fd67dc7556 fix(utils.py): fix import 2024-04-06 18:37:38 -07:00
Krrish Dholakia
9b2b6b42c5 fix(router.py): improve pre-call check -> get model group cache one-time 2024-04-06 18:24:51 -07:00
Krrish Dholakia
504936b83e fix(router.py): check usage based routing cache in pre-call check
allows pre-call rpm check to work across instances
2024-04-06 18:19:02 -07:00
Krrish Dholakia
3cd1b8d458 fix(router.py): store in-memory deployment request count for 60s only 2024-04-06 17:53:39 -07:00
Krrish Dholakia
8d38b3bfc4 fix(router.py): make router async calls coroutine safe
uses pre-call checks to check if a call is below its rpm limit; works even if multiple async calls are
made simultaneously
2024-04-06 17:31:26 -07:00
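The commit above describes making the router's RPM pre-call check safe under concurrent async calls, with in-memory deployment counts kept for only 60s (per the follow-up commit 3cd1b8d458). A minimal sketch of that idea, not litellm's actual code — `RpmLimiter` and `try_acquire` are hypothetical names:

```python
import asyncio
import time

class RpmLimiter:
    """Sketch: per-deployment request counts kept for a short TTL window,
    guarded by a lock so concurrent coroutines see consistent counts."""

    def __init__(self, rpm_limit: int, ttl: float = 60.0):
        self.rpm_limit = rpm_limit
        self.ttl = ttl  # in-memory counts only live this long
        self._counts: dict[str, tuple[int, float]] = {}  # deployment -> (count, window_start)
        self._lock = asyncio.Lock()

    async def try_acquire(self, deployment: str) -> bool:
        async with self._lock:  # serialize check-and-increment
            now = time.monotonic()
            count, start = self._counts.get(deployment, (0, now))
            if now - start > self.ttl:   # window expired: reset the count
                count, start = 0, now
            if count >= self.rpm_limit:  # over the per-minute limit
                return False
            self._counts[deployment] = (count + 1, start)
            return True

async def demo():
    limiter = RpmLimiter(rpm_limit=2)
    # Five simultaneous calls against an rpm limit of 2: only 2 may pass.
    return await asyncio.gather(*(limiter.try_acquire("azure-gpt-4") for _ in range(5)))

print(asyncio.run(demo()))
```

Without the lock, two coroutines could both read the same count and both pass the check, exceeding the limit — which is the race the commit fixes.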
Krrish Dholakia
0dad78b53c feat(proxy/utils.py): return api base for request hanging alerts 2024-04-06 15:58:53 -07:00
Ishaan Jaff
515740f75c feat - re-use OpenAI client for azure text 2024-04-06 12:23:58 -07:00
Ishaan Jaff
0d7e2aaab7 feat - re-use openai client for text completion 2024-04-06 11:28:25 -07:00
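The two commits above re-use a single OpenAI/Azure client across text-completion calls instead of constructing one per request. A hedged sketch of the caching pattern, with a stand-in `FakeClient` in place of the real SDK client:

```python
class FakeClient:
    """Stand-in for an OpenAI/Azure SDK client (hypothetical)."""
    created = 0  # track how many clients get constructed

    def __init__(self, api_key: str, api_base: str):
        FakeClient.created += 1
        self.api_key, self.api_base = api_key, api_base

_client_cache: dict[tuple[str, str], FakeClient] = {}

def get_client(api_key: str, api_base: str) -> FakeClient:
    """Return a cached client for (api_key, api_base), constructing it at most once."""
    key = (api_key, api_base)
    if key not in _client_cache:
        _client_cache[key] = FakeClient(api_key, api_base)
    return _client_cache[key]

a = get_client("sk-1", "https://api.openai.com")
b = get_client("sk-1", "https://api.openai.com")
print(a is b, FakeClient.created)  # True 1
```

Re-using the client also re-uses its underlying HTTP connection pool, which is the main latency win.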
Ishaan Jaff
72fddabf84 Merge pull request #2868 from BerriAI/litellm_add_command_r_on_proxy
Add Azure Command-r-plus on litellm proxy
2024-04-05 15:13:47 -07:00
Ishaan Jaff
5c1a662caa proxy - add azure/command r 2024-04-05 14:35:31 -07:00
Krrish Dholakia
ece37a4b7f feat(ui): add models via ui
adds ability to add models via ui to the proxy. also fixes additional bugs around new /model/new endpoint
2024-04-04 18:56:20 -07:00
Krrish Dholakia
48a5948081 fix(router.py): handle id being passed in as int 2024-04-04 14:23:10 -07:00
Krrish Dholakia
0294c3f8a9 test(test_router.py): fix test to check type 2024-04-04 11:45:12 -07:00
Krrish Dholakia
a4a8129a13 fix(router.py): fix pydantic object logic 2024-04-03 21:57:19 -07:00
Krrish Dholakia
129bb52e9d fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances

https://github.com/BerriAI/litellm/issues/2319, https://github.com/BerriAI/litellm/issues/2329
2024-04-03 20:16:41 -07:00
Ishaan Jaff
c2b9799e42 Merge pull request #2788 from BerriAI/litellm_support_-_models
[Feat] Allow using model = * on proxy config.yaml
2024-04-01 19:46:50 -07:00
Ishaan Jaff
8c75bb5f0f feat router allow * models 2024-04-01 19:00:24 -07:00
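The PR above lets a proxy `config.yaml` declare `model = *`, so a deployment can match any requested model. A minimal sketch of wildcard matching using stdlib glob semantics — `matching_deployments` is a hypothetical helper, not litellm's API:

```python
from fnmatch import fnmatch

def matching_deployments(requested: str, configured: list[str]) -> list[str]:
    """Return the configured model-name patterns that cover the requested model.
    A bare "*" entry matches everything; glob patterns like "azure/*" also work."""
    return [pattern for pattern in configured if fnmatch(requested, pattern)]

print(matching_deployments("gpt-4", ["gpt-3.5-turbo", "*"]))
print(matching_deployments("azure/command-r-plus", ["azure/*", "gpt-4"]))
```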
Krrish Dholakia
0072174ef9 docs(routing.md): refactor docs to show how to use pre-call checks and fallback across model groups 2024-04-01 11:21:27 -07:00
Krrish Dholakia
b2b8375987 fix(router.py): support context window fallbacks for pre-call checks 2024-04-01 10:51:54 -07:00
Krrish Dholakia
fb1de8b5e0 fix(router.py): fix check for context window fallbacks
fallback if list is not none
2024-04-01 10:41:12 -07:00
Krrish Dholakia
00d27a324d fix(router.py): check for context window error when handling 400 status code errors
was causing proxy context window fallbacks to not work as expected
2024-03-26 08:08:15 -07:00
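The fix above hinges on distinguishing context-window errors from other 400-status errors, since providers return both under the same status code. A heuristic sketch of that classification — not litellm's actual classifier, and the matched phrases are illustrative:

```python
def looks_like_context_window_error(status_code: int, message: str) -> bool:
    """Heuristic sketch: treat a 400 as a context-window error only if the
    provider's error message says so; otherwise don't trigger fallbacks."""
    if status_code != 400:
        return False
    msg = message.lower()
    return ("context length" in msg
            or "maximum context" in msg
            or "token limit" in msg)

print(looks_like_context_window_error(400, "This model's maximum context length is 8192 tokens"))
print(looks_like_context_window_error(400, "Invalid request: missing field 'model'"))
```

Only calls classified this way should be retried against a larger-context deployment; retrying a genuinely malformed request would fail everywhere.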
Krrish Dholakia
8821b3d243 feat(main.py): support router.chat.completions.create
allows using router with instructor

https://github.com/BerriAI/litellm/issues/2673
2024-03-25 08:26:28 -07:00
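The commit above gives the router an OpenAI-style `router.chat.completions.create(...)` surface so libraries like instructor, which expect an OpenAI client, can be pointed at the router. A toy sketch of the delegation pattern, with a stand-in `Router` (the helper class names are hypothetical):

```python
class Router:
    """Toy router with a completion() entrypoint (stand-in for the real one)."""
    def completion(self, model: str, messages: list[dict]) -> dict:
        return {"model": model, "choices": [{"message": {"content": "ok"}}]}

class _Completions:
    def __init__(self, router: Router):
        self._router = router
    def create(self, **kwargs):
        # Delegate the OpenAI-style call to the router's own completion()
        return self._router.completion(**kwargs)

class _Chat:
    def __init__(self, router: Router):
        self.completions = _Completions(router)

router = Router()
router.chat = _Chat(router)  # expose router.chat.completions.create
resp = router.chat.completions.create(
    model="gpt-4", messages=[{"role": "user", "content": "hi"}]
)
print(resp["choices"][0]["message"]["content"])  # ok
```

Any caller written against the OpenAI client shape now transparently gets the router's load balancing and fallbacks.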
Krrish Dholakia
8c6402b02d docs(routing.md): add pre-call checks to docs 2024-03-23 19:10:34 -07:00
Krrish Dholakia
292cdd81e4 fix(router.py): fix pre call check logic 2024-03-23 18:56:08 -07:00
Krrish Dholakia
4e70a3e09a feat(router.py): enable pre-call checks
filter models outside of context window limits of a given message for a model group

https://github.com/BerriAI/litellm/issues/872
2024-03-23 18:03:30 -07:00
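The commit above introduces pre-call checks that filter out deployments whose context window cannot fit the message before routing picks one. A minimal sketch of that filter — the token counts and window sizes below are hypothetical examples, not authoritative provider limits:

```python
# Hypothetical per-model context-window sizes, in tokens.
CONTEXT_WINDOW = {"gpt-3.5-turbo": 4096, "gpt-3.5-turbo-16k": 16384}

def prefilter(deployments: list[str], prompt_tokens: int) -> list[str]:
    """Sketch of a pre-call check: drop deployments from the model group
    whose context window cannot fit the prompt."""
    return [d for d in deployments if CONTEXT_WINDOW.get(d, 0) >= prompt_tokens]

# A 9000-token prompt can only go to the 16k deployment.
print(prefilter(["gpt-3.5-turbo", "gpt-3.5-turbo-16k"], prompt_tokens=9000))
```

Filtering before the call avoids a guaranteed 400 from the smaller deployment, rather than relying on after-the-fact fallbacks.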
Krrish Dholakia
0bbc8ac4ad fix(router.py): add no-proxy support for router 2024-03-14 14:25:30 -07:00
ishaan-jaff
ef7fbcf617 (fix) raising No healthy deployment 2024-03-13 08:00:56 -07:00
Ishaan Jaff
89ef2023e9 Merge branch 'main' into litellm_imp_mem_use 2024-03-11 19:00:56 -07:00
Ishaan Jaff
fa655d62fb Merge pull request #2461 from BerriAI/litellm_improve_mem_use
LiteLLM -  improve memory utilization - don't use inMemCache on Router
2024-03-11 18:59:57 -07:00
ishaan-jaff
39299d3aa7 (fix) mem usage router.py 2024-03-11 16:52:06 -07:00
ishaan-jaff
b617263860 (fix) improve mem util 2024-03-11 16:22:04 -07:00
Krrish Dholakia
03e8ce938b fix(router.py): support fallbacks / retries with sync embedding calls 2024-03-11 14:51:22 -07:00
Krrish Dholakia
a97e8a9029 fix(router.py): add more debug logs 2024-03-11 12:34:35 -07:00
Ishaan Jaff
3f520d8c93 Merge pull request #2416 from BerriAI/litellm_use_consistent_port
(docs) LiteLLM Proxy - use port 4000 in examples
2024-03-09 16:32:08 -08:00
ishaan-jaff
9e1d089770 (docs) use port 4000 2024-03-08 21:59:00 -08:00
Krrish Dholakia
aca37d3bc5 test(test_whisper.py): add testing for load balancing whisper endpoints on router 2024-03-08 14:19:37 -08:00
Krrish Dholakia
93e9781d37 feat(router.py): add load balancing for async transcription calls 2024-03-08 13:58:15 -08:00
ishaan-jaff
f1cc47e6dc (fix) show latency per deployment on router debug logs 2024-03-07 18:50:45 -08:00
ishaan-jaff
db002315e3 (feat) print debug info per deployment 2024-03-07 18:33:09 -08:00
Krrish Dholakia
bcfb113b22 fix(router.py): fix text completion error logging 2024-02-24 10:46:59 -08:00
Krrish Dholakia
21f2d9ce59 fix(router.py): mask the api key in debug statements on router 2024-02-21 18:13:03 -08:00
Krrish Dholakia
2796f1c61c fix(router.py): fix debug log 2024-02-21 08:45:42 -08:00
ishaan-jaff
6c1c2e8c7d (feat) add moderation on router 2024-02-14 11:00:09 -08:00
ishaan-jaff
cd9005d6e6 (feat) support timeout on bedrock 2024-02-09 17:42:17 -08:00
ishaan-jaff
d5af088f12 (feat) log model_info in router metadata 2024-02-07 15:44:28 -08:00
Krish Dholakia
058813da76 Merge branch 'main' into litellm_http_proxy_support 2024-02-01 09:18:50 -08:00
Krrish Dholakia
96c630a740 fix(router.py): remove wrapping of router.completion(); let clients handle this 2024-01-30 21:12:41 -08:00
ishaan-jaff
2806a2e99f (fix) use OpenAI organization in ahealth_check 2024-01-30 11:45:22 -08:00
ishaan-jaff
463ad30d84 (router) set organization OpenAI 2024-01-30 10:54:05 -08:00