Krish Dholakia
dad1ad2077
LiteLLM Minor Fixes and Improvements (09/14/2024) (#5697)
...
* fix(health_check.py): hide sensitive keys from health check debug information
* fix(route_llm_request.py): fix proxy model not found error message to indicate how to resolve issue
* fix(vertex_llm_base.py): fix exception message to not log credentials
2024-09-14 10:32:39 -07:00
Ishaan Jaff
0e1d3804ff
refactor vertex endpoints to pass through all routes
2024-08-21 17:08:42 -07:00
Ishaan Jaff
398295116f
only write model tpm/rpm tracking when user sets it
2024-08-18 09:58:09 -07:00
Ishaan Jaff
fa96610bbc
fix async_pre_call_hook in parallel request limiter
2024-08-17 12:42:28 -07:00
Ishaan Jaff
feb8c3c5b4
Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header
...
[Feat] return `x-litellm-key-remaining-requests-{model}`: 1, `x-litellm-key-remaining-tokens-{model}`: None in response headers
2024-08-17 12:41:16 -07:00
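For context on the headers named in that PR title, here is a minimal client-side sketch; the proxy URL, key, and model name are placeholders, and substituting the model name into the `{model}` suffix is an assumption drawn from the title, not a confirmed format:

    import requests

    # Call a LiteLLM proxy (URL, virtual key, and model name are placeholders).
    resp = requests.post(
        "http://localhost:4000/chat/completions",
        headers={"Authorization": "Bearer sk-1234"},
        json={
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "hi"}],
        },
    )

    # Per-model remaining budget for this key, read from the response headers.
    print(resp.headers.get("x-litellm-key-remaining-requests-gpt-4o"))
    print(resp.headers.get("x-litellm-key-remaining-tokens-gpt-4o"))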
Ishaan Jaff
ee0f772b5c
feat return remaining tokens for model for api key
2024-08-17 12:35:10 -07:00
Ishaan Jaff
5985c7e933
feat - use common helper for getting model group
2024-08-17 10:46:04 -07:00
Ishaan Jaff
412d30d362
add litellm-key-remaining-tokens metric to prometheus
2024-08-17 10:02:20 -07:00
Ishaan Jaff
785482f023
feat add settings for rpm/tpm limits for a model
2024-08-17 09:16:01 -07:00
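A hedged sketch of what these per-model limit settings might look like when generating a virtual key via the proxy's /key/generate endpoint; the model_rpm_limit / model_tpm_limit field names and values are assumptions inferred from the commit titles, not a confirmed schema:

    import requests

    # Hypothetical: create a virtual key with per-model rpm/tpm ceilings.
    resp = requests.post(
        "http://localhost:4000/key/generate",
        headers={"Authorization": "Bearer sk-1234"},  # proxy master key (placeholder)
        json={
            "models": ["gpt-4o"],
            "model_rpm_limit": {"gpt-4o": 100},      # assumed field name
            "model_tpm_limit": {"gpt-4o": 100_000},  # assumed field name
        },
    )
    print(resp.json())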
Ishaan Jaff
1ee33478c9
track rpm/tpm usage per key+model
2024-08-16 18:28:58 -07:00
Krrish Dholakia
61f4b71ef7
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 09:22:47 -07:00
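The difference is small but matters for Sentry and log aggregation: logging's exception() records the active traceback, while error() alone does not. A minimal illustration:

    import logging

    logger = logging.getLogger(__name__)

    try:
        1 / 0
    except ZeroDivisionError:
        # Message only -- no stack trace attached.
        logger.error("division failed")
        # Message plus the current traceback, which Sentry can group on.
        logger.exception("division failed")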
Krrish Dholakia
5d96ff6694
fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
...
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
Ishaan Jaff
c4e4b4675c
fix: raise a better error when crossing tpm/rpm limits
2024-07-26 17:35:08 -07:00
Krrish Dholakia
07d90f6739
feat(aporio_ai.py): support Aporio AI prompt injection detection for chat completion requests
...
Closes https://github.com/BerriAI/litellm/issues/2950
2024-07-17 16:38:47 -07:00
Krrish Dholakia
fde434be66
feat(proxy_server.py): return 'retry-after' param for rate limited requests
...
Closes https://github.com/BerriAI/litellm/issues/4695
2024-07-13 17:15:20 -07:00
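A client-side sketch of honoring that value; the URL and key are placeholders, and it assumes the proxy surfaces the wait time as a standard Retry-After header on a 429, which is how the commit title reads:

    import time
    import requests

    resp = requests.post(
        "http://localhost:4000/chat/completions",
        headers={"Authorization": "Bearer sk-1234"},
        json={"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
    )

    if resp.status_code == 429:
        # Assumed: the proxy returns the wait time, in seconds, as Retry-After.
        wait = int(resp.headers.get("retry-after", "1"))
        time.sleep(wait)
        # ...retry the request here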
Krrish Dholakia
0cc273d77b
feat(pass_through_endpoint.py): support enforcing key rpm limits on pass through endpoints
...
Closes https://github.com/BerriAI/litellm/issues/4698
2024-07-13 13:29:44 -07:00
Krrish Dholakia
76c9b715f2
fix(parallel_request_limiter.py): use redis cache, if available for rate limiting across instances
...
Fixes https://github.com/BerriAI/litellm/issues/4148
2024-06-12 10:35:48 -07:00
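The general idea behind that fix, as a minimal sketch: keep the counters in a store every proxy instance can see. This is the generic fixed-window pattern with redis-py, not LiteLLM's actual implementation:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def allow_request(api_key: str, rpm_limit: int) -> bool:
        # One shared counter per key per minute, visible to every proxy instance.
        window = time.strftime("%Y-%m-%d-%H-%M")
        counter_key = f"rpm:{api_key}:{window}"
        count = r.incr(counter_key)
        if count == 1:
            # First hit in this window: expire the counter when the window ends.
            r.expire(counter_key, 60)
        return count <= rpm_limit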
Krrish Dholakia
4408b717f0
fix(parallel_request_limiter.py): fix user+team tpm/rpm limit check
...
Closes https://github.com/BerriAI/litellm/issues/3788
2024-05-27 08:48:23 -07:00
Ishaan Jaff
106910cecf
feat - add end user rate limiting
2024-05-22 14:01:57 -07:00
Krrish Dholakia
594ca947c8
fix(parallel_request_limiter.py): fix max parallel request limiter on retries
2024-05-15 20:16:11 -07:00
Krrish Dholakia
5a117490ec
fix(proxy_server.py): fix tpm/rpm limiting for jwt auth
...
fixes tpm/rpm limiting for jwt auth and implements unit tests for jwt auth
2024-03-28 21:19:34 -07:00
Krrish Dholakia
7876aa2d75
fix(parallel_request_limiter.py): handle metadata being none
2024-03-14 10:02:41 -07:00
Krrish Dholakia
ad55f4dbb5
feat(proxy_server.py): retry if virtual key is rate limited
...
currently for chat completions
2024-03-05 19:00:03 -08:00
Krrish Dholakia
b3574f2b37
fix(parallel_request_limiter.py): handle none scenario
2024-02-26 20:09:06 -08:00
Krrish Dholakia
f86ab19067
fix(parallel_request_limiter.py): fix team rate limit enforcement
2024-02-26 18:06:13 -08:00
Krrish Dholakia
f84ac35000
feat(parallel_request_limiter.py): enforce team based tpm / rpm limits
2024-02-26 16:20:41 -08:00
ishaan-jaff
a13243652f
(fix) failing parallel_request_limiter test
2024-02-22 19:16:22 -08:00
ishaan-jaff
1fff8f8105
(fix) don't double-check current date and time
2024-02-22 18:50:02 -08:00
ishaan-jaff
b5900099af
(feat) tpm/rpm limit by User
2024-02-22 18:44:03 -08:00
Krrish Dholakia
b9393fb769
fix(test_parallel_request_limiter.py): use mock responses for streaming
2024-02-08 21:45:38 -08:00
ishaan-jaff
13fe72d6d5
(fix) parallel_request_limiter debug
2024-02-06 12:43:28 -08:00
Krrish Dholakia
92058cbcd4
fix(utils.py): override default success callbacks with dynamic callbacks if set
2024-02-02 06:21:43 -08:00
Krrish Dholakia
bbe71c8375
fix(test_parallel_request_limiter): increase time limit for waiting for success logging event to happen
2024-01-30 13:26:17 -08:00
Krrish Dholakia
f05aba1f85
fix(utils.py): add metadata to logging obj on setup, if exists
2024-01-19 17:29:47 -08:00
Krrish Dholakia
1a29272b47
fix(parallel_request_limiter.py): handle tpm/rpm limits being null
2024-01-19 10:22:27 -08:00
Krrish Dholakia
5dac2402ef
test(test_parallel_request_limiter.py): unit testing for tpm/rpm rate limits
2024-01-18 15:28:28 -08:00
Krrish Dholakia
aef59c554f
feat(parallel_request_limiter.py): add support for tpm/rpm limits
2024-01-18 13:52:15 -08:00
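In spirit, the check introduced here is per-key request/token accounting against configured ceilings. A toy in-memory sketch of that shape, not the proxy's actual hook:

    # Toy per-key tpm/rpm check: reject once either counter would exceed its limit.
    # Window reset is omitted for brevity; a real limiter scopes counters per minute.
    usage: dict[str, dict[str, int]] = {}  # api_key -> {"requests": n, "tokens": n}

    def check_limits(api_key: str, tokens: int, tpm_limit: int, rpm_limit: int) -> bool:
        current = usage.setdefault(api_key, {"requests": 0, "tokens": 0})
        if current["requests"] + 1 > rpm_limit or current["tokens"] + tokens > tpm_limit:
            return False  # caller should raise an HTTP 429 here
        current["requests"] += 1
        current["tokens"] += tokens
        return True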
Krrish Dholakia
1ea3833ef7
fix(parallel_request_limiter.py): decrement count for failed llm calls
...
https://github.com/BerriAI/litellm/issues/1477
2024-01-18 12:42:14 -08:00
Krrish Dholakia
4905929de3
refactor: add black formatting
2023-12-25 14:11:20 +05:30
Krrish Dholakia
9f79f75635
fix(proxy/utils.py): return different exceptions if key is invalid vs. expired
...
https://github.com/BerriAI/litellm/issues/1230
2023-12-25 10:29:44 +05:30
Krrish Dholakia
402b2e5733
build(test_streaming.py): fix linting issues
2023-12-25 07:34:54 +05:30
Krrish Dholakia
4791dda66f
feat(proxy_server.py): enable infinite retries on rate limited requests
2023-12-15 20:03:41 -08:00
Krrish Dholakia
effdddc1c8
fix(custom_logger.py): enable pre_call hooks to modify incoming data to proxy
2023-12-13 16:20:37 -08:00
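For orientation, a hedged sketch of a pre-call hook that mutates the request body before it reaches the model; the class and method names follow LiteLLM's CustomLogger pattern, but the exact signature is an assumption and may differ between versions:

    from litellm.integrations.custom_logger import CustomLogger

    class AddTagHook(CustomLogger):
        # Assumed signature: called by the proxy before the LLM request is sent.
        async def async_pre_call_hook(self, user_api_key_dict, cache, data, call_type):
            # Mutate the incoming request, e.g. tag it for downstream logging.
            data.setdefault("metadata", {})["source"] = "proxy-pre-call-hook"
            return data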
Krrish Dholakia
6ef0e8485e
fix(proxy_server.py): support for streaming
2023-12-09 16:23:04 -08:00
Krrish Dholakia
5fa2b6e5ad
fix(proxy_server.py): enable pre+post-call hooks and max parallel request limits
2023-12-08 17:11:30 -08:00