Krrish Dholakia | 200e8784f3 | 2024-04-15 18:34:58 -07:00
fix(proxy_server.py): fix delete models endpoint
https://github.com/BerriAI/litellm/issues/2951

Krrish Dholakia | 7179bf753a | 2024-04-15 15:59:36 -07:00
build(ui): add vertex ai models via ui

Krish Dholakia | 0bc7c98265 | 2024-04-15 14:45:01 -07:00
Merge pull request #2981 from grav/grav/default_model_name_to_none
Default model_name to None in _aembedding

Krrish Dholakia | 9c183fcd9f | 2024-04-15 09:02:18 -07:00
fix(proxy_server.py): return none if no model list set in router
https://github.com/BerriAI/litellm/issues/2979

Krrish Dholakia | c177407f7b | 2024-04-12 18:56:13 -07:00
test(test_openai_endpoints.py): add concurrency testing for user-defined rate limits on proxy

Krrish Dholakia | d9b8f63e86 | 2024-04-12 18:25:14 -07:00
fix(router.py): support pre_call_rpm_check for lowest_tpm_rpm_v2 routing
have routing strategies expose an ‘update rpm’ function for checking + updating rpm pre-call

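A minimal sketch of the pre-call RPM check described in d9b8f63e86 above: the routing strategy keeps a per-deployment, per-minute request count and both checks and updates it before the call goes out. Class and method names here are illustrative, not litellm's actual implementation.

    # Hypothetical sketch, not litellm's actual code: a routing strategy that checks
    # and updates a per-deployment RPM counter before the request is made.
    import time
    from collections import defaultdict

    class RpmCheckingStrategy:
        def __init__(self, rpm_limits):
            self.rpm_limits = rpm_limits              # deployment id -> allowed requests per minute
            self.usage = defaultdict(int)             # (deployment id, minute bucket) -> count

        def pre_call_check(self, deployment_id):
            """Return True and reserve a slot if the deployment is under its RPM limit."""
            key = (deployment_id, int(time.time() // 60))
            if self.usage[key] >= self.rpm_limits.get(deployment_id, float("inf")):
                return False                          # over the limit; pick another deployment
            self.usage[key] += 1                      # update rpm before the call, not after
            return True

    strategy = RpmCheckingStrategy(rpm_limits={"azure/gpt-4-eu": 60})
    print(strategy.pre_call_check("azure/gpt-4-eu"))  # True until 60 calls land in the same minute
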
Krrish Dholakia | 5f1fcaad6d | 2024-04-12 18:03:23 -07:00
fix(router.py): create a semaphore for each deployment with rpm
run semaphore logic for each deployment with rpm

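The per-deployment semaphore from 5f1fcaad6d (and the initial commit 87c621d726 below) can be pictured roughly like this, with made-up names: one asyncio.Semaphore per deployment that has an rpm limit, so that deployment's usage bookkeeping and request don't run with unbounded concurrency.

    # Rough sketch, assuming hypothetical names: one asyncio.Semaphore per deployment.
    import asyncio

    class DeploymentSemaphores:
        def __init__(self):
            self._semaphores = {}

        def get(self, deployment_id, max_parallel=1):
            # lazily create one semaphore per deployment and reuse it on later calls
            if deployment_id not in self._semaphores:
                self._semaphores[deployment_id] = asyncio.Semaphore(max_parallel)
            return self._semaphores[deployment_id]

    async def call_deployment(sems, deployment_id):
        async with sems.get(deployment_id):           # rpm bookkeeping + request run under the semaphore
            await asyncio.sleep(0.01)                 # stand-in for the actual LLM request
            return f"response from {deployment_id}"

    async def main():
        sems = DeploymentSemaphores()
        print(await asyncio.gather(*[call_deployment(sems, "azure/gpt-4") for _ in range(5)]))

    asyncio.run(main())
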
Krrish Dholakia | 87c621d726 | 2024-04-12 17:59:05 -07:00
fix(router.py): initial commit for semaphores on router

Mikkel Gravgaard | c3a8f9a447 | 2024-04-12 11:33:03 +02:00
Default model_name to None in _aembedding

Ishaan Jaff | 0d8063ee49 | 2024-04-11 15:05:22 -07:00
fix - stop printing api_key in debug mode

Krrish Dholakia | 0863c10b0b | 2024-04-11 09:27:46 -07:00
fix(router.py): make sure pre call rpm check runs even when model not in model cost map

Krrish Dholakia | f5ed34f801 | 2024-04-10 18:32:54 -07:00
fix(router.py): handle 1 deployment being picked

Krrish Dholakia | 5744d17086 | 2024-04-10 18:06:31 -07:00
fix(router.py): move specific deployment check outside common functions

Krrish Dholakia | 8f06c2d8c4 | 2024-04-10 17:55:24 -07:00
fix(router.py): fix datetime object

Krrish Dholakia | 384245e331 | 2024-04-10 16:57:01 -07:00
fix(router.py): make get_cooldown_deployment logic async

Krrish Dholakia | f5206d592a | 2024-04-10 15:23:57 -07:00
fix(router.py): generate consistent model ids
having the same id for a deployment lets redis usage caching work across multiple instances

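One way to picture the "consistent model id" idea in f5206d592a: derive the id deterministically from the deployment's params, so every proxy instance computes the same id and shared Redis usage counters line up. The field names below are examples, not litellm's exact schema.

    # Illustrative sketch: a deterministic deployment id derived from its config.
    import hashlib
    import json

    def deterministic_model_id(litellm_params):
        # sorting keys makes identical params serialize identically on every instance
        serialized = json.dumps(litellm_params, sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()[:16]

    params = {"model": "azure/gpt-4", "api_base": "https://my-endpoint.openai.azure.com"}
    print(deterministic_model_id(params))   # same value on every instance for identical params
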
Krrish Dholakia | 31e2d4e6d1 | 2024-04-10 14:56:23 -07:00
feat(lowest_tpm_rpm_v2.py): move to using redis.incr and redis.mget for getting model usage from redis
makes routing work across multiple instances

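The redis.incr / redis.mget pattern from 31e2d4e6d1 looks roughly like this (key names are invented for illustration; requires the redis package): each instance atomically increments a per-deployment, per-minute counter, then reads every deployment's counter in one round trip when routing.

    # Rough sketch of shared usage counters in Redis; the key layout is an assumption.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    minute = int(time.time() // 60)
    deployment_ids = ["deploy-a", "deploy-b"]

    # INCR is atomic, so concurrent proxy instances don't lose updates
    r.incr(f"rpm:{deployment_ids[0]}:{minute}")

    # read every deployment's counter in one round trip before picking the next deployment
    counts = r.mget([f"rpm:{d}:{minute}" for d in deployment_ids])
    print({d: int(c or 0) for d, c in zip(deployment_ids, counts)})
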
Krrish Dholakia | 06a0ca1e80 | 2024-04-10 13:23:56 -07:00
fix(proxy_cli.py): don't double load the router config
was causing callbacks to be instantiated twice - double counting usage in cache

Krish Dholakia | 1060244a7f | 2024-04-06 19:17:18 -07:00
Merge pull request #2880 from BerriAI/litellm_api_base_alerting
feat(proxy/utils.py): return api base for request hanging alerts

Krrish Dholakia | fd67dc7556 | 2024-04-06 18:37:38 -07:00
fix(utils.py): fix import

Krrish Dholakia | 9b2b6b42c5 | 2024-04-06 18:24:51 -07:00
fix(router.py): improve pre-call check -> get model group cache one-time

Krrish Dholakia | 504936b83e | 2024-04-06 18:19:02 -07:00
fix(router.py): check usage based routing cache in pre-call check
allows pre-call rpm check to work across instances

Krrish Dholakia | 3cd1b8d458 | 2024-04-06 17:53:39 -07:00
fix(router.py): store in-memory deployment request count for 60s only

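A small sketch of the "60s only" idea in 3cd1b8d458 (a hypothetical helper, not litellm's actual cache): keep the in-memory per-deployment request count behind a 60-second expiry so stale local counts don't skew rpm-based routing.

    # Illustrative TTL counter: counts reset once the 60-second window has elapsed.
    import time

    class TTLCounter:
        def __init__(self, ttl_seconds=60.0):
            self.ttl = ttl_seconds
            self._data = {}                              # key -> (count, expiry timestamp)

        def increment(self, key):
            now = time.time()
            count, expires_at = self._data.get(key, (0, now + self.ttl))
            if now >= expires_at:                        # window elapsed, start a fresh count
                count, expires_at = 0, now + self.ttl
            self._data[key] = (count + 1, expires_at)
            return count + 1

    counter = TTLCounter()
    print(counter.increment("deployment-1"))             # 1; resets automatically after 60 seconds
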
Krrish Dholakia | 8d38b3bfc4 | 2024-04-06 17:31:26 -07:00
fix(router.py): make router async calls coroutine safe
uses pre-call checks to check if a call is below its rpm limit; works even if multiple async calls are made simultaneously

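The coroutine-safety concern in 8d38b3bfc4 is essentially a check-then-increment race: two simultaneous async calls can both read the same count and both slip under the limit. A minimal illustration of one fix, with made-up names, is to do the read-modify-write under a single asyncio.Lock.

    # Minimal sketch (not litellm's actual code): coroutine-safe rpm check + increment.
    import asyncio

    class SafeRpmLimiter:
        def __init__(self, limit):
            self.limit = limit
            self.count = 0
            self._lock = asyncio.Lock()

        async def try_acquire(self):
            async with self._lock:            # check and increment atomically w.r.t. other coroutines
                if self.count >= self.limit:
                    return False
                self.count += 1
                return True

    async def main():
        limiter = SafeRpmLimiter(limit=3)
        results = await asyncio.gather(*[limiter.try_acquire() for _ in range(5)])
        print(results)                        # exactly three True values, regardless of interleaving

    asyncio.run(main())
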
Krrish Dholakia | 0dad78b53c | 2024-04-06 15:58:53 -07:00
feat(proxy/utils.py): return api base for request hanging alerts

Ishaan Jaff | 515740f75c | 2024-04-06 12:23:58 -07:00
feat - re-use OpenAI client for azure text

Ishaan Jaff | 0d7e2aaab7 | 2024-04-06 11:28:25 -07:00
feat - re-use openai client for text completion

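The client re-use in 515740f75c / 0d7e2aaab7 amounts to caching one OpenAI SDK client per credential set instead of constructing a new client (and a new connection pool) on every request. A hedged sketch with an invented cache-key layout:

    # Sketch only: cache and reuse openai.AsyncOpenAI clients keyed by (api_key, base_url).
    import openai

    _client_cache = {}

    def get_client(api_key, base_url=None):
        cache_key = (api_key, base_url)
        if cache_key not in _client_cache:
            # building the client sets up its own HTTP connection pool; reusing it
            # avoids paying that setup cost on every chat or text-completion call
            _client_cache[cache_key] = openai.AsyncOpenAI(api_key=api_key, base_url=base_url)
        return _client_cache[cache_key]
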
Ishaan Jaff | 72fddabf84 | 2024-04-05 15:13:47 -07:00
Merge pull request #2868 from BerriAI/litellm_add_command_r_on_proxy
Add Azure Command-r-plus on litellm proxy

Ishaan Jaff | 5c1a662caa | 2024-04-05 14:35:31 -07:00
proxy - add azure/command r

Krrish Dholakia | ece37a4b7f | 2024-04-04 18:56:20 -07:00
feat(ui): add models via ui
adds the ability to add models to the proxy via the ui; also fixes additional bugs around the new /model/new endpoint

Krrish Dholakia | 48a5948081 | 2024-04-04 14:23:10 -07:00
fix(router.py): handle id being passed in as int

Krrish Dholakia | 0294c3f8a9 | 2024-04-04 11:45:12 -07:00
test(test_router.py): fix test to check type

Krrish Dholakia | a4a8129a13 | 2024-04-03 21:57:19 -07:00
fix(router.py): fix pydantic object logic

Krrish Dholakia | 129bb52e9d | 2024-04-03 20:16:41 -07:00
fix(proxy_server.py): persist models added via /model/new to db
allows models to be used across instances
https://github.com/BerriAI/litellm/issues/2319
https://github.com/BerriAI/litellm/issues/2329

Ishaan Jaff | c2b9799e42 | 2024-04-01 19:46:50 -07:00
Merge pull request #2788 from BerriAI/litellm_support_-_models
[Feat] Allow using model = * on proxy config.yaml

Ishaan Jaff | 8c75bb5f0f | 2024-04-01 19:00:24 -07:00
feat router allow * models

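The model = * support from #2788 / 8c75bb5f0f can be read as: if no deployment's model_name matches the requested model exactly, fall back to deployments whose model_name is "*". A simplified sketch of that matching rule (not litellm's actual router code; the deployment dicts are illustrative):

    # Simplified wildcard matching: "*" deployments catch any otherwise-unmatched model name.
    def find_deployments(requested_model, deployments):
        exact = [d for d in deployments if d["model_name"] == requested_model]
        if exact:
            return exact
        return [d for d in deployments if d["model_name"] == "*"]

    deployments = [
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "azure/gpt-35-turbo"}},
        {"model_name": "*", "litellm_params": {"model": "*"}},
    ]
    print(find_deployments("claude-3-opus-20240229", deployments))   # falls through to the "*" deployment
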
Krrish Dholakia | 0072174ef9 | 2024-04-01 11:21:27 -07:00
docs(routing.md): refactor docs to show how to use pre-call checks and fallback across model groups

Krrish Dholakia | b2b8375987 | 2024-04-01 10:51:54 -07:00
fix(router.py): support context window fallbacks for pre-call checks

Krrish Dholakia | fb1de8b5e0 | 2024-04-01 10:41:12 -07:00
fix(router.py): fix check for context window fallbacks
fallback if list is not none

Krrish Dholakia | 00d27a324d | 2024-03-26 08:08:15 -07:00
fix(router.py): check for context window error when handling 400 status code errors
was causing proxy context window fallbacks to not work as expected

Krrish Dholakia | 8821b3d243 | 2024-03-25 08:26:28 -07:00
feat(main.py): support router.chat.completions.create
allows using router with instructor
https://github.com/BerriAI/litellm/issues/2673

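Commit 8821b3d243 exposes an OpenAI-client-shaped surface on the router, which is what lets libraries such as instructor (which expect client.chat.completions.create) sit on top of it. A usage sketch; the model_list entry below is illustrative:

    # Usage sketch of the OpenAI-style call path on the litellm Router.
    import os
    from litellm import Router

    router = Router(
        model_list=[
            {
                "model_name": "gpt-3.5-turbo",
                "litellm_params": {"model": "gpt-3.5-turbo", "api_key": os.environ.get("OPENAI_API_KEY")},
            }
        ]
    )

    response = router.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
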
Krrish Dholakia | 8c6402b02d | 2024-03-23 19:10:34 -07:00
docs(routing.md): add pre-call checks to docs

Krrish Dholakia | 292cdd81e4 | 2024-03-23 18:56:08 -07:00
fix(router.py): fix pre call check logic

Krrish Dholakia | 4e70a3e09a | 2024-03-23 18:03:30 -07:00
feat(router.py): enable pre-call checks
filter models outside of context window limits of a given message for a model group
https://github.com/BerriAI/litellm/issues/872

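The pre-call check from 4e70a3e09a can be pictured as filtering a model group down to the deployments whose context window can still fit the prompt. A rough sketch with a crude token estimate (a real check would use a tokenizer and the model cost map; the dicts below are illustrative):

    # Simplified sketch: drop deployments whose max input tokens can't fit the prompt.
    def filter_by_context_window(messages, deployments):
        prompt_tokens = sum(len(m["content"]) // 4 for m in messages)   # crude estimate
        usable = [
            d for d in deployments
            if d.get("max_input_tokens", float("inf")) >= prompt_tokens
        ]
        # if nothing fits, keep the original list and let the provider surface the error
        return usable or deployments

    deployments = [
        {"model_name": "gpt-3.5-turbo", "max_input_tokens": 4096},
        {"model_name": "gpt-3.5-turbo-16k", "max_input_tokens": 16384},
    ]
    long_prompt = [{"role": "user", "content": "x" * 40_000}]           # roughly 10k tokens
    print(filter_by_context_window(long_prompt, deployments))           # only the 16k deployment remains
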
Krrish Dholakia | 0bbc8ac4ad | 2024-03-14 14:25:30 -07:00
fix(router.py): add no-proxy support for router

ishaan-jaff | ef7fbcf617 | 2024-03-13 08:00:56 -07:00
(fix) raising No healthy deployment

Ishaan Jaff | 89ef2023e9 | 2024-03-11 19:00:56 -07:00
Merge branch 'main' into litellm_imp_mem_use

Ishaan Jaff | fa655d62fb | 2024-03-11 18:59:57 -07:00
Merge pull request #2461 from BerriAI/litellm_improve_mem_use
LiteLLM - improve memory utilization - don't use inMemCache on Router

ishaan-jaff | 39299d3aa7 | 2024-03-11 16:52:06 -07:00
(fix) mem usage router.py

ishaan-jaff | b617263860 | 2024-03-11 16:22:04 -07:00
(fix) improve mem util