Commit graph

681 commits

Author SHA1 Message Date
Krrish Dholakia
81573b2dd9 fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 2024-04-18 21:38:00 -07:00
Ishaan Jaff
67d356b933 fix - show api base on hanging requests 2024-04-18 20:58:02 -07:00
Krrish Dholakia
2ffd057042 test(test_models.py): ensure only admin can call /health 2024-04-16 18:13:40 -07:00
Ishaan Jaff
caec0a3938 fix - make router set_settings non blocking 2024-04-16 18:04:21 -07:00
Ishaan Jaff
7e99854d05
Merge pull request #3079 from BerriAI/litellm_router_save_settings_ui
UI - Save / Edit router settings UI
2024-04-16 16:57:42 -07:00
Ishaan Jaff
59b154f152 feat - update router settings on Admin UI 2024-04-16 15:36:26 -07:00
Krrish Dholakia
13cd252f3e fix(proxy_server.py): ensure id used in delete deployment matches id used in litellm Router 2024-04-16 15:17:18 -07:00
Ishaan Jaff
e271ce8030 router - get settings 2024-04-16 14:22:54 -07:00
Krrish Dholakia
2d4fe072ad fix(proxy_server.py): fix delete models endpoint
https://github.com/BerriAI/litellm/issues/2951
2024-04-15 18:34:58 -07:00
Krrish Dholakia
e4bcc51e44 build(ui): add vertex ai models via ui 2024-04-15 15:59:36 -07:00
Krish Dholakia
0d2a75d301
Merge pull request #2981 from grav/grav/default_model_name_to_none
Default model_name to None in _aembedding
2024-04-15 14:45:01 -07:00
Krrish Dholakia
43c37c31ea fix(proxy_server.py): return none if no model list set in router
https://github.com/BerriAI/litellm/issues/2979
2024-04-15 09:02:18 -07:00
Krrish Dholakia
ea1574c160 test(test_openai_endpoints.py): add concurrency testing for user defined rate limits on proxy 2024-04-12 18:56:13 -07:00
Krrish Dholakia
c03b0bbb24 fix(router.py): support pre_call_rpm_check for lowest_tpm_rpm_v2 routing
have routing strategies expose an 'update rpm' function, used to check and update rpm before the call
2024-04-12 18:25:14 -07:00
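The 'update rpm' hook named above might look roughly like this: a minimal sketch of a routing strategy that checks a deployment's request count and records the call before it is made. Class and method names (UsageBasedRoutingV2, pre_call_rpm_check) are illustrative, not litellm's actual internals.

```python
import time

class UsageBasedRoutingV2:
    def __init__(self, rpm_limits: dict):
        self.rpm_limits = rpm_limits  # deployment_id -> allowed requests/minute
        self.counts: dict = {}        # (deployment_id, minute) -> requests so far

    def pre_call_rpm_check(self, deployment_id: str) -> bool:
        """Check the deployment is under its rpm limit for the current
        minute and, if so, count this request before the call is made."""
        minute = int(time.time()) // 60
        key = (deployment_id, minute)
        used = self.counts.get(key, 0)
        if used >= self.rpm_limits.get(deployment_id, float("inf")):
            return False  # over limit; the router should pick another deployment
        self.counts[key] = used + 1
        return True
```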
Krrish Dholakia
2267aeb803 fix(router.py): create a semaphore for each deployment with rpm
run semaphore logic for each deployment with rpm
2024-04-12 18:03:23 -07:00
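A minimal sketch of the per-deployment semaphore idea, assuming a shared registry keyed by deployment id; illustrative only, litellm's internals may differ.

```python
import asyncio

semaphores: dict = {}  # deployment_id -> asyncio.Semaphore

def get_semaphore(deployment_id: str, rpm: int) -> asyncio.Semaphore:
    # Lazily create one semaphore per deployment that declares an rpm limit.
    if deployment_id not in semaphores:
        semaphores[deployment_id] = asyncio.Semaphore(rpm)
    return semaphores[deployment_id]

async def call_with_limit(deployment_id: str, rpm: int, make_call):
    async with get_semaphore(deployment_id, rpm):
        # Requests queue here instead of blowing past the deployment's limit.
        return await make_call()
```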
Krrish Dholakia
a4e415b23c fix(router.py): initial commit for semaphores on router 2024-04-12 17:59:05 -07:00
Mikkel Gravgaard
759414004a Default model_name to None in _aembedding 2024-04-12 11:33:03 +02:00
Ishaan Jaff
921f6d307e fix - stop printing api_key in debug mode 2024-04-11 15:05:22 -07:00
Krrish Dholakia
84d43484c6 fix(router.py): make sure pre call rpm check runs even when model not in model cost map 2024-04-11 09:27:46 -07:00
Krrish Dholakia
266dba65e7 fix(router.py): handle 1 deployment being picked 2024-04-10 18:32:54 -07:00
Krrish Dholakia
52462e8bac fix(router.py): move specific deployment check outside common functions 2024-04-10 18:06:31 -07:00
Krrish Dholakia
37ac17aebd fix(router.py): fix datetime object 2024-04-10 17:55:24 -07:00
Krrish Dholakia
2531701a2a fix(router.py): make get_cooldown_deployment logic async 2024-04-10 16:57:01 -07:00
Krrish Dholakia
a47a719caa fix(router.py): generate consistent model ids
having the same id for a deployment lets redis usage caching work across multiple instances
2024-04-10 15:23:57 -07:00
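One way to get consistent ids, sketched under the assumption that the id is derived deterministically from the deployment's config (litellm's actual scheme may differ): hash the canonicalized litellm_params, so every instance computes the same id and the redis usage keys line up.

```python
import hashlib
import json

def deployment_id(litellm_params: dict) -> str:
    # Same config on every instance -> same canonical JSON -> same id.
    canonical = json.dumps(litellm_params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```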
Krrish Dholakia
180cf9bd5c feat(lowest_tpm_rpm_v2.py): move to using redis.incr and redis.mget for getting model usage from redis
makes routing work across multiple instances
2024-04-10 14:56:23 -07:00
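A sketch of the redis.incr / redis.mget pattern with redis-py: INCR counts each request, MGET reads every deployment's count in one round trip, so all instances see shared usage. The key schema and TTL here are assumptions, not litellm's actual layout.

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def record_request(deployment_id: str) -> None:
    minute = int(time.time()) // 60
    key = f"rpm:{deployment_id}:{minute}"  # per-minute bucket per deployment
    r.incr(key)
    r.expire(key, 120)  # let stale minute buckets expire on their own

def read_usage(deployment_ids: list) -> dict:
    minute = int(time.time()) // 60
    keys = [f"rpm:{d}:{minute}" for d in deployment_ids]
    return {d: int(v or 0) for d, v in zip(deployment_ids, r.mget(keys))}
```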
Krrish Dholakia
b2741933dc fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice, double counting usage in the cache
2024-04-10 13:23:56 -07:00
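The fix pattern implied here is a load-once guard; a hypothetical sketch (litellm's actual fix may be structured differently):

```python
_config_loaded = False
callbacks: list = []

def load_router_config(config: dict) -> None:
    global _config_loaded
    if _config_loaded:
        return  # already applied; re-registering callbacks double counts usage
    callbacks.extend(config.get("callbacks", []))
    _config_loaded = True
```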
Krish Dholakia
83f608dc5d
Merge pull request #2880 from BerriAI/litellm_api_base_alerting
feat(proxy/utils.py): return api base for request hanging alerts
2024-04-06 19:17:18 -07:00
Krrish Dholakia
460546956d fix(utils.py): fix import 2024-04-06 18:37:38 -07:00
Krrish Dholakia
6f94f3d127 fix(router.py): improve pre-call check -> get model group cache one-time 2024-04-06 18:24:51 -07:00
Krrish Dholakia
7ae6432f94 fix(router.py): check usage based routing cache in pre-call check
allows pre-call rpm check to work across instances
2024-04-06 18:19:02 -07:00
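Sketch of a pre-call filter reading that shared usage cache: deployments whose cached count is already at their rpm limit are skipped, even if the traffic came from another instance. The field names (litellm_params, model_info) follow litellm's deployment shape, but the function itself is illustrative.

```python
def filter_deployments(deployments: list, usage: dict) -> list:
    healthy = []
    for d in deployments:
        rpm_limit = d["litellm_params"].get("rpm")
        d_id = d["model_info"]["id"]
        if rpm_limit is not None and usage.get(d_id, 0) >= rpm_limit:
            continue  # budget spent, possibly by another instance
        healthy.append(d)
    return healthy
```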
Krrish Dholakia
205ac1496a fix(router.py): store in-memory deployment request count for 60s only 2024-04-06 17:53:39 -07:00
Krrish Dholakia
0d1cca9aa0 fix(router.py): make router async calls coroutine safe
uses pre-call checks to verify a call is below its rpm limit; works even if multiple async calls are
made simultaneously
2024-04-06 17:31:26 -07:00
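Coroutine safety here comes down to making check-then-increment atomic: without a lock, two simultaneous coroutines can both read "under limit" and both proceed. A minimal sketch with hypothetical names:

```python
import asyncio

class RPMGate:
    def __init__(self, limit: int):
        self.limit = limit
        self.count = 0
        self._lock = asyncio.Lock()

    async def try_acquire(self) -> bool:
        async with self._lock:  # serialize the read-modify-write
            if self.count >= self.limit:
                return False
            self.count += 1
            return True
```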
Krrish Dholakia
6110d32b1c feat(proxy/utils.py): return api base for request hanging alerts 2024-04-06 15:58:53 -07:00
Ishaan Jaff
a6bc673ffa feat - re-use OpenAI client for azure text 2024-04-06 12:23:58 -07:00
Ishaan Jaff
01fef1a9f8 feat - re-use openai client for text completion 2024-04-06 11:28:25 -07:00
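The client re-use in these two commits presumably amounts to caching a constructed client instead of building one per request; a sketch assuming a cache keyed by (api_key, api_base):

```python
from openai import AsyncOpenAI

_client_cache: dict = {}

def get_client(api_key: str, api_base: str | None = None) -> AsyncOpenAI:
    cache_key = (api_key, api_base)
    if cache_key not in _client_cache:
        _client_cache[cache_key] = AsyncOpenAI(api_key=api_key, base_url=api_base)
    return _client_cache[cache_key]
```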
Ishaan Jaff
faa0d38087
Merge pull request #2868 from BerriAI/litellm_add_command_r_on_proxy
Add Azure Command-r-plus on litellm proxy
2024-04-05 15:13:47 -07:00
Ishaan Jaff
9055a071e6 proxy - add azure/command r 2024-04-05 14:35:31 -07:00
Krrish Dholakia
e3c2bdef4d feat(ui): add models via ui
adds the ability to add models to the proxy via the UI; also fixes additional bugs around the new /model/new endpoint
2024-04-04 18:56:20 -07:00
Krrish Dholakia
2236f283fe fix(router.py): handle id being passed in as int 2024-04-04 14:23:10 -07:00
Krrish Dholakia
b9030be792 test(test_router.py): fix test to check type 2024-04-04 11:45:12 -07:00
Krrish Dholakia
20849cbbfc fix(router.py): fix pydantic object logic 2024-04-03 21:57:19 -07:00
Krrish Dholakia
f536fb13e6 fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances

https://github.com/BerriAI/litellm/issues/2319 , https://github.com/BerriAI/litellm/issues/2329
2024-04-03 20:16:41 -07:00
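The cross-instance idea: /model/new writes the model to a shared database, and each instance reads the table back into its router. A hypothetical sketch with sqlite standing in for the proxy's actual database; the table and function names are invented.

```python
import json
import sqlite3

db = sqlite3.connect("proxy.db")
db.execute("CREATE TABLE IF NOT EXISTS proxy_models (model_name TEXT, params TEXT)")

def add_model(model_name: str, litellm_params: dict) -> None:
    db.execute(
        "INSERT INTO proxy_models VALUES (?, ?)",
        (model_name, json.dumps(litellm_params)),
    )
    db.commit()

def load_models() -> list:
    rows = db.execute("SELECT model_name, params FROM proxy_models").fetchall()
    return [{"model_name": n, "litellm_params": json.loads(p)} for n, p in rows]
```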
Ishaan Jaff
92984a1c6f
Merge pull request #2788 from BerriAI/litellm_support_-_models
[Feat] Allow using model = * on proxy config.yaml
2024-04-01 19:46:50 -07:00
Ishaan Jaff
aabd7eff1f feat router allow * models 2024-04-01 19:00:24 -07:00
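Based on the PR title, the wildcard lets a single catch-all entry route any requested model name; a sketch of what that might look like on the Router (exact semantics may differ):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "*",                 # catch-all for any requested model
            "litellm_params": {"model": "*"},  # pass the requested model through
        }
    ]
)
```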
Krrish Dholakia
a917fadf45 docs(routing.md): refactor docs to show how to use pre-call checks and fallback across model groups 2024-04-01 11:21:27 -07:00
Krrish Dholakia
52b1538b2e fix(router.py): support context window fallbacks for pre-call checks 2024-04-01 10:51:54 -07:00
Krrish Dholakia
f46a9d09a5 fix(router.py): fix check for context window fallbacks
fall back if the list is not None
2024-04-01 10:41:12 -07:00
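Putting the two features above together on the Router; enable_pre_call_checks and context_window_fallbacks are documented litellm Router parameters, though the model names here are just examples:

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "gpt-3.5-turbo-16k", "litellm_params": {"model": "gpt-3.5-turbo-16k"}},
    ],
    enable_pre_call_checks=True,  # filter out deployments with too-small context windows
    context_window_fallbacks=[{"gpt-3.5-turbo": ["gpt-3.5-turbo-16k"]}],
)
```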
Krrish Dholakia
49e8cdbff9 fix(router.py): check for context window error when handling 400 status code errors
was causing proxy context window fallbacks to not work as expected
2024-03-26 08:08:15 -07:00
Krrish Dholakia
f98aead602 feat(main.py): support router.chat.completions.create
allows using router with instructor

https://github.com/BerriAI/litellm/issues/2673
2024-03-25 08:26:28 -07:00
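A usage sketch, assuming the OpenAI-client call shape the commit name implies; because the router exposes chat.completions.create, libraries such as instructor that wrap an OpenAI-style client can sit on top of it:

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}}
    ]
)

# OpenAI-client-shaped entrypoint added by this commit.
response = router.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```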
Krrish Dholakia
e8e7964025 docs(routing.md): add pre-call checks to docs 2024-03-23 19:10:34 -07:00