Krrish Dholakia
|
2267aeb803
|
fix(router.py): create a semaphore for each deployment with rpm
run semaphore logic for each deployment that has an rpm limit set
|
2024-04-12 18:03:23 -07:00 |
|
Krrish Dholakia
|
a4e415b23c
|
fix(router.py): initial commit for semaphores on router
|
2024-04-12 17:59:05 -07:00 |
|
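A minimal sketch of the per-deployment semaphore approach from the two commits above, assuming a deployment dict with an optional `rpm` in `litellm_params` and an id under `model_info`; litellm's actual wiring may differ:

```python
import asyncio
from typing import Optional

# Hypothetical cache: one semaphore per deployment, keyed by model id.
_semaphores: dict = {}

def get_deployment_semaphore(deployment: dict) -> Optional[asyncio.Semaphore]:
    rpm = deployment.get("litellm_params", {}).get("rpm")
    if rpm is None:
        return None  # no rpm limit configured -> no concurrency gate
    model_id = deployment["model_info"]["id"]
    if model_id not in _semaphores:
        # Simplification: cap in-flight requests at the rpm value.
        _semaphores[model_id] = asyncio.Semaphore(rpm)
    return _semaphores[model_id]

async def call_with_limit(deployment: dict, make_request):
    sem = get_deployment_semaphore(deployment)
    if sem is None:
        return await make_request()
    async with sem:  # waits here when the deployment is saturated
        return await make_request()
```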
Mikkel Gravgaard
|
759414004a
|
Default model_name to None in _aembedding
|
2024-04-12 11:33:03 +02:00 |
|
Ishaan Jaff
|
921f6d307e
|
fix - stop printing api_key in debug mode
|
2024-04-11 15:05:22 -07:00 |
|
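A sketch of the kind of redaction the fix above implies; `mask_key` is a hypothetical helper, not litellm's:

```python
from typing import Optional

def mask_key(api_key: Optional[str]) -> str:
    # Keep only a short prefix so logs stay debuggable without
    # leaking the credential itself.
    if not api_key:
        return "None"
    return api_key[:4] + "*" * (len(api_key) - 4)

# e.g. debug logs print mask_key(key) instead of the raw key
print(mask_key("sk-1234abcd"))  # sk-1*******
```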
Krrish Dholakia
|
84d43484c6
|
fix(router.py): make sure pre call rpm check runs even when model not in model cost map
|
2024-04-11 09:27:46 -07:00 |
|
Krrish Dholakia
|
266dba65e7
|
fix(router.py): handle 1 deployment being picked
|
2024-04-10 18:32:54 -07:00 |
|
Krrish Dholakia
|
52462e8bac
|
fix(router.py): move specific deployment check outside common functions
|
2024-04-10 18:06:31 -07:00 |
|
Krrish Dholakia
|
37ac17aebd
|
fix(router.py): fix datetime object
|
2024-04-10 17:55:24 -07:00 |
|
Krrish Dholakia
|
2531701a2a
|
fix(router.py): make get_cooldown_deployment logic async
|
2024-04-10 16:57:01 -07:00 |
|
Krrish Dholakia
|
a47a719caa
|
fix(router.py): generate consistent model ids
having the same id for a deployment lets redis usage caching work across multiple instances
|
2024-04-10 15:23:57 -07:00 |
|
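One way to get the stable ids the commit above calls for: derive the id deterministically from the deployment's own params instead of a random uuid. A sketch, not litellm's exact scheme:

```python
import hashlib
import json

def generate_model_id(litellm_params: dict) -> str:
    # Serialize the params with sorted keys so every router instance
    # computes the same id for the same deployment.
    blob = json.dumps(litellm_params, sort_keys=True, default=str)
    return hashlib.sha256(blob.encode()).hexdigest()

a = generate_model_id({"model": "azure/gpt-4", "api_base": "https://eu.example.com"})
b = generate_model_id({"api_base": "https://eu.example.com", "model": "azure/gpt-4"})
assert a == b  # same deployment -> same id, so Redis keys line up across instances
```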
Krrish Dholakia
|
180cf9bd5c
|
feat(lowest_tpm_rpm_v2.py): move to using redis.incr and redis.mget for getting model usage from redis
makes routing work across multiple instances
|
2024-04-10 14:56:23 -07:00 |
|
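A sketch of the incr/mget pattern from the commit above, using the redis-py client; the key layout here is made up:

```python
import redis

r = redis.Redis()  # assumes a reachable Redis; litellm configures this via its cache settings

def record_request(model_id: str, minute: str) -> None:
    key = f"rpm:{model_id}:{minute}"   # hypothetical per-minute key
    r.incr(key)                        # atomic increment, shared by all instances
    r.expire(key, 120)                 # let old minute buckets age out

def get_usage(model_ids: list, minute: str) -> dict:
    keys = [f"rpm:{m}:{minute}" for m in model_ids]
    values = r.mget(keys)              # one round-trip for the whole model group
    return {m: int(v) if v else 0 for m, v in zip(model_ids, values)}
```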
Krrish Dholakia
|
b2741933dc
|
fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice, double-counting usage in the cache
|
2024-04-10 13:23:56 -07:00 |
|
Krish Dholakia
|
83f608dc5d
|
Merge pull request #2880 from BerriAI/litellm_api_base_alerting
feat(proxy/utils.py): return api base for request hanging alerts
|
2024-04-06 19:17:18 -07:00 |
|
Krrish Dholakia
|
460546956d
|
fix(utils.py): fix import
|
2024-04-06 18:37:38 -07:00 |
|
Krrish Dholakia
|
6f94f3d127
|
fix(router.py): improve pre-call check -> get model group cache one-time
|
2024-04-06 18:24:51 -07:00 |
|
Krrish Dholakia
|
7ae6432f94
|
fix(router.py): check usage based routing cache in pre-call check
allows pre-call rpm check to work across instances
|
2024-04-06 18:19:02 -07:00 |
|
Krrish Dholakia
|
205ac1496a
|
fix(router.py): store in-memory deployment request count for 60s only
|
2024-04-06 17:53:39 -07:00 |
|
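A sketch of a 60-second in-memory counter as described in the commit above; real code would share this state with the router's in-memory cache:

```python
import time

_counts: dict = {}  # model_id -> (count, window_expiry)

def increment_request_count(model_id: str, ttl: float = 60.0) -> int:
    now = time.time()
    count, expiry = _counts.get(model_id, (0, now + ttl))
    if now >= expiry:                 # window expired: start a fresh 60s count
        count, expiry = 0, now + ttl
    count += 1
    _counts[model_id] = (count, expiry)
    return count
```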
Krrish Dholakia
|
0d1cca9aa0
|
fix(router.py): make router async calls coroutine safe
uses pre-call checks to verify a call is below its rpm limit; works even if multiple async calls are made simultaneously
|
2024-04-06 17:31:26 -07:00 |
|
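The coroutine-safety issue in the commit above is a check-then-act race: two coroutines can both read a count below the rpm limit before either increments it. A sketch of closing the gap with an asyncio.Lock; names are illustrative:

```python
import asyncio

_lock = asyncio.Lock()
_request_count: dict = {}

async def try_acquire_slot(model_id: str, rpm_limit: int) -> bool:
    # Read and increment under a single lock so simultaneous coroutines
    # can't both observe the same pre-increment count.
    async with _lock:
        current = _request_count.get(model_id, 0)
        if current >= rpm_limit:
            return False  # over the limit; caller should try another deployment
        _request_count[model_id] = current + 1
        return True
```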
Krrish Dholakia
|
6110d32b1c
|
feat(proxy/utils.py): return api base for request hanging alerts
|
2024-04-06 15:58:53 -07:00 |
|
Ishaan Jaff
|
a6bc673ffa
|
feat - re-use OpenAI client for azure text
|
2024-04-06 12:23:58 -07:00 |
|
Ishaan Jaff
|
01fef1a9f8
|
feat - re-use openai client for text completion
|
2024-04-06 11:28:25 -07:00 |
|
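A sketch of the client re-use pattern in the two commits above: cache one client per (api_key, api_base) pair instead of constructing a fresh one per request, keeping the connection pool warm. Uses the openai SDK; the cache key is an assumption:

```python
from typing import Optional
from openai import AsyncOpenAI

_clients: dict = {}

def get_client(api_key: str, api_base: Optional[str] = None) -> AsyncOpenAI:
    cache_key = (api_key, api_base)
    if cache_key not in _clients:
        # Client construction sets up a new HTTP connection pool;
        # re-use keeps TCP/TLS connections alive across requests.
        _clients[cache_key] = AsyncOpenAI(api_key=api_key, base_url=api_base)
    return _clients[cache_key]
```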
Ishaan Jaff
|
faa0d38087
|
Merge pull request #2868 from BerriAI/litellm_add_command_r_on_proxy
Add Azure Command-r-plus on litellm proxy
|
2024-04-05 15:13:47 -07:00 |
|
Ishaan Jaff
|
9055a071e6
|
proxy - add azure/command r
|
2024-04-05 14:35:31 -07:00 |
|
Krrish Dholakia
|
e3c2bdef4d
|
feat(ui): add models via ui
adds the ability to add models to the proxy via the UI; also fixes additional bugs around the new /model/new endpoint
|
2024-04-04 18:56:20 -07:00 |
|
Krrish Dholakia
|
2236f283fe
|
fix(router.py): handle id being passed in as int
|
2024-04-04 14:23:10 -07:00 |
|
Krrish Dholakia
|
b9030be792
|
test(test_router.py): fix test to check type
|
2024-04-04 11:45:12 -07:00 |
|
Krrish Dholakia
|
20849cbbfc
|
fix(router.py): fix pydantic object logic
|
2024-04-03 21:57:19 -07:00 |
|
Krrish Dholakia
|
f536fb13e6
|
fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances
https://github.com/BerriAI/litellm/issues/2319 , https://github.com/BerriAI/litellm/issues/2329
|
2024-04-03 20:16:41 -07:00 |
|
Ishaan Jaff
|
92984a1c6f
|
Merge pull request #2788 from BerriAI/litellm_support_-_models
[Feat] Allow using model = * on proxy config.yaml
|
2024-04-01 19:46:50 -07:00 |
|
Ishaan Jaff
|
aabd7eff1f
|
feat(router.py): allow * (wildcard) models
|
2024-04-01 19:00:24 -07:00 |
|
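A sketch of how a `model_name: "*"` entry can resolve, assuming the router falls back to the catch-all deployment when no exact match exists and forwards the requested model to the provider; field names are illustrative:

```python
from typing import Optional

def resolve_deployment(model_name: str, model_list: list) -> Optional[dict]:
    # Prefer an exact model_name match...
    for d in model_list:
        if d["model_name"] == model_name:
            return d
    # ...otherwise fall back to a wildcard entry, passing the
    # requested model through to the underlying provider unchanged.
    for d in model_list:
        if d["model_name"] == "*":
            return {**d, "litellm_params": {**d["litellm_params"], "model": model_name}}
    return None
```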
Krrish Dholakia
|
a917fadf45
|
docs(routing.md): refactor docs to show how to use pre-call checks and fallbacks across model groups
|
2024-04-01 11:21:27 -07:00 |
|
Krrish Dholakia
|
52b1538b2e
|
fix(router.py): support context window fallbacks for pre-call checks
|
2024-04-01 10:51:54 -07:00 |
|
Krrish Dholakia
|
f46a9d09a5
|
fix(router.py): fix check for context window fallbacks
fall back if the fallbacks list is not None
|
2024-04-01 10:41:12 -07:00 |
|
Krrish Dholakia
|
49e8cdbff9
|
fix(router.py): check for context window error when handling 400 status code errors
was causing proxy context window fallbacks not to work as expected
|
2024-03-26 08:08:15 -07:00 |
|
Krrish Dholakia
|
f98aead602
|
feat(main.py): support router.chat.completions.create
allows using router with instructor
https://github.com/BerriAI/litellm/issues/2673
|
2024-03-25 08:26:28 -07:00 |
|
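The commit above gives the router an OpenAI-client-shaped surface so libraries like instructor, which call `client.chat.completions.create(...)`, can be handed a Router. A sketch of the facade, not litellm's code:

```python
class _Completions:
    def __init__(self, router):
        self._router = router

    def create(self, model: str, messages: list, **kwargs):
        # Delegate to the router, which still handles deployment
        # selection, retries, and fallbacks underneath.
        return self._router.completion(model=model, messages=messages, **kwargs)

class _Chat:
    def __init__(self, router):
        self.completions = _Completions(router)

# Hypothetical wiring inside Router.__init__: self.chat = _Chat(self),
# after which router.chat.completions.create(model=..., messages=[...]) works.
```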
Krrish Dholakia
|
e8e7964025
|
docs(routing.md): add pre-call checks to docs
|
2024-03-23 19:10:34 -07:00 |
|
Krrish Dholakia
|
b7321ae4ee
|
fix(router.py): fix pre call check logic
|
2024-03-23 18:56:08 -07:00 |
|
Krrish Dholakia
|
eb3ca85d7e
|
feat(router.py): enable pre-call checks
filters out models in a model group whose context window limits can't fit a given message
https://github.com/BerriAI/litellm/issues/872
|
2024-03-23 18:03:30 -07:00 |
|
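A sketch of the pre-call check the commit above enables: drop deployments whose context window can't fit the prompt. `max_input_tokens` under `model_info` is an assumption about where the limit lives:

```python
def pre_call_filter(deployments: list, prompt_tokens: int) -> list:
    eligible = []
    for d in deployments:
        limit = d.get("model_info", {}).get("max_input_tokens")
        # Unknown limit (model not in the cost map) -> keep the deployment
        # rather than over-filtering.
        if limit is None or prompt_tokens <= limit:
            eligible.append(d)
    if not eligible:
        raise ValueError("no deployment can fit this prompt's context window")
    return eligible
```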
Krrish Dholakia
|
1ba21a8c58
|
fix(router.py): add no-proxy support for router
|
2024-03-14 14:25:30 -07:00 |
|
ishaan-jaff
|
aaa008ecde
|
(fix) raising 'No healthy deployment' error
|
2024-03-13 08:00:56 -07:00 |
|
Ishaan Jaff
|
cd8f25f6f8
|
Merge branch 'main' into litellm_imp_mem_use
|
2024-03-11 19:00:56 -07:00 |
|
Ishaan Jaff
|
881063c424
|
Merge pull request #2461 from BerriAI/litellm_improve_mem_use
LiteLLM - improve memory utilization - don't use inMemCache on Router
|
2024-03-11 18:59:57 -07:00 |
|
ishaan-jaff
|
eae1710c4b
|
(fix) mem usage in router.py
|
2024-03-11 16:52:06 -07:00 |
|
ishaan-jaff
|
1bd3bb1128
|
(fix) improve mem util
|
2024-03-11 16:22:04 -07:00 |
|
Krrish Dholakia
|
9735250db7
|
fix(router.py): support fallbacks / retries with sync embedding calls
|
2024-03-11 14:51:22 -07:00 |
|
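A sketch of the retry-then-fallback loop the commit above brings to sync embedding calls; the error handling is simplified and would normally distinguish retryable errors:

```python
import time

def embedding_with_fallbacks(model_groups: list, call, num_retries: int = 2):
    # model_groups: the primary group followed by its fallback groups.
    # call: a function(model_group) that performs the sync embedding request.
    last_err = None
    for group in model_groups:
        for attempt in range(num_retries + 1):
            try:
                return call(group)
            except Exception as e:
                last_err = e
                time.sleep(0.5 * (2 ** attempt))  # simple exponential backoff
    raise last_err
```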
Krrish Dholakia
|
2f1899284c
|
fix(router.py): add more debug logs
|
2024-03-11 12:34:35 -07:00 |
|
Ishaan Jaff
|
a1784284bb
|
Merge pull request #2416 from BerriAI/litellm_use_consistent_port
(docs) LiteLLM Proxy - use port 4000 in examples
|
2024-03-09 16:32:08 -08:00 |
|
ishaan-jaff
|
ea6f42216c
|
(docs) use port 4000
|
2024-03-08 21:59:00 -08:00 |
|
Krrish Dholakia
|
fe125a5131
|
test(test_whisper.py): add testing for load balancing whisper endpoints on router
|
2024-03-08 14:19:37 -08:00 |
|
Krrish Dholakia
|
ae54b398d2
|
feat(router.py): add load balancing for async transcription calls
|
2024-03-08 13:58:15 -08:00 |
|
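Finally, a sketch of load-balanced async transcription per the commit above: the same deployment-picking step used for chat, applied to transcription. The random pick stands in for litellm's routing strategies, and the per-deployment client is hypothetical:

```python
import random

async def atranscription_balanced(deployments: list, audio_file):
    # Pick one healthy deployment from the "whisper" model group.
    deployment = random.choice(deployments)  # stand-in for least-busy / lowest-rpm picking
    client = deployment["client"]            # hypothetical cached AsyncOpenAI client
    return await client.audio.transcriptions.create(
        model=deployment["litellm_params"]["model"],
        file=audio_file,
    )
```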