Commit graph

142 commits

Author SHA1 Message Date
Krrish Dholakia
8e8c4e214e fix: fix linting issue 2024-03-21 08:19:09 -07:00
Krrish Dholakia
d91f9a9f50 feat(proxy_server.py): enable llm api based prompt injection checks
run user calls through an llm api to check for prompt injection attacks. This happens in parallel to the actual llm call using `async_moderation_hook`
2024-03-20 22:43:42 -07:00
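The "in parallel to the actual llm call" idea above can be sketched with plain asyncio. This is an illustrative stand-in, not litellm's actual `async_moderation_hook` API; the function names are hypothetical:

```python
import asyncio

async def llm_call(prompt: str) -> str:
    # Stand-in for the real model call.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def moderation_check(prompt: str) -> bool:
    # Stand-in for an LLM-based prompt-injection classifier.
    await asyncio.sleep(0.01)
    return "ignore previous instructions" not in prompt.lower()

async def guarded_call(prompt: str) -> str:
    # Run the moderation check concurrently with the actual call,
    # so the check adds no latency on the happy path.
    response, is_safe = await asyncio.gather(
        llm_call(prompt), moderation_check(prompt)
    )
    if not is_safe:
        raise ValueError("prompt injection detected")
    return response

result = asyncio.run(guarded_call("hello"))
```

The trade-off: a flagged prompt still reaches the model (its response is just discarded), in exchange for zero added latency on clean requests.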
Krrish Dholakia
f24d3ffdb6 fix(proxy_server.py): fix import 2024-03-20 19:15:06 -07:00
Krrish Dholakia
3bb0e24cb7 fix(prompt_injection_detection.py): ensure combinations are actual phrases, not just 1-2 words
reduces misflagging

https://github.com/BerriAI/litellm/issues/2601
2024-03-20 19:09:38 -07:00
Krrish Dholakia
8a20ea795b feat(batch_redis_get.py): batch redis GET requests for a given key + call type
reduces number of redis requests. 85ms latency improvement over 3 minutes of load (19k requests).
2024-03-15 14:54:16 -07:00
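The batching idea behind the two `batch_redis_get.py` commits above can be sketched as coalescing N GETs into one MGET round trip. A dict-backed stub stands in for the real Redis client; names here are illustrative:

```python
class StubRedis:
    """Dict-backed stand-in for a Redis client, counting round trips."""
    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def mget(self, keys):
        # One network round trip fetches every key at once.
        self.round_trips += 1
        return [self.data.get(k) for k in keys]

def batch_get(client, keys):
    # Instead of one GET (and one round trip) per key, issue a single MGET.
    values = client.mget(keys)
    return dict(zip(keys, values))

client = StubRedis({"user:1:rpm": "3", "user:1:tpm": "1200"})
result = batch_get(client, ["user:1:rpm", "user:1:tpm", "user:2:rpm"])
```

Under load, the savings scale with key count: each eliminated round trip is one network RTT, which is where the cited latency improvement comes from.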
Krrish Dholakia
226953e1d8 feat(batch_redis_get.py): batch redis GET requests for a given key + call type
reduces the number of GET requests we're making in high-throughput scenarios
2024-03-15 14:40:11 -07:00
Krrish Dholakia
7876aa2d75 fix(parallel_request_limiter.py): handle metadata being none 2024-03-14 10:02:41 -07:00
Krrish Dholakia
ad55f4dbb5 feat(proxy_server.py): retry if virtual key is rate limited
currently for chat completions
2024-03-05 19:00:03 -08:00
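Retrying a rate-limited virtual key, as in the commit above, is typically done with exponential backoff. A minimal sketch, with hypothetical names (not litellm's implementation):

```python
import time

class RateLimitError(Exception):
    pass

def call_with_retry(fn, max_retries=3, base_delay=0.01):
    # Retry rate-limited calls with exponential backoff instead of
    # failing the request immediately.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError("virtual key over its rpm limit")
    return "ok"

result = call_with_retry(flaky)
```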
Krrish Dholakia
b3574f2b37 fix(parallel_request_limiter.py): handle none scenario 2024-02-26 20:09:06 -08:00
Krrish Dholakia
f86ab19067 fix(parallel_request_limiter.py): fix team rate limit enforcement 2024-02-26 18:06:13 -08:00
Krrish Dholakia
f84ac35000 feat(parallel_request_limiter.py): enforce team based tpm / rpm limits 2024-02-26 16:20:41 -08:00
ishaan-jaff
a13243652f (fix) failing parallel_Request_limiter test 2024-02-22 19:16:22 -08:00
ishaan-jaff
1fff8f8105 (fix) don't double check curr data and time 2024-02-22 18:50:02 -08:00
ishaan-jaff
b5900099af (feat) tpm/rpm limit by User 2024-02-22 18:44:03 -08:00
Krrish Dholakia
d706d3b672 fix(presidio_pii_masking.py): enable user to pass ad hoc recognizer for pii masking 2024-02-20 16:01:15 -08:00
Krrish Dholakia
72bcd5a4af fix(presidio_pii_masking.py): enable user to pass their own ad hoc recognizers to presidio 2024-02-20 15:19:31 -08:00
Krrish Dholakia
9fa4dfbdd3 test(test_presidio_pii_masking.py): add more unit tests 2024-02-19 16:30:44 -08:00
Krrish Dholakia
448537e684 feat(presidio_pii_masking.py): allow request level controls for turning on/off pii masking
https://github.com/BerriAI/litellm/issues/2003
2024-02-17 11:04:56 -08:00
Krrish Dholakia
3565f74338 docs(enterprise.md): add llama guard tutorial to enterprise docs 2024-02-17 09:25:49 -08:00
Krrish Dholakia
cd8d35107b fix: check key permissions for turning on/off pii masking 2024-02-15 20:16:15 -08:00
Krrish Dholakia
6b91f48c64 fix(presidio_pii_masking.py): fix conditional check 2024-02-13 22:11:03 -08:00
Krrish Dholakia
f68b656040 feat(presidio_pii_masking.py): enable output parsing for pii masking 2024-02-13 21:36:57 -08:00
Krrish Dholakia
2d845b12ed feat(proxy_server.py): support for pii masking with microsoft presidio 2024-02-10 20:21:12 -08:00
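The PII-masking commits above combine two ideas: mask in a pre-call hook, and let requests opt out only when the key's permissions allow it. A toy sketch, assuming a trivial regex in place of Presidio's analyzer/anonymizer and hypothetical field names:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> str:
    # Trivial regex stand-in for a Presidio analyzer/anonymizer pipeline.
    return EMAIL.sub("<EMAIL_ADDRESS>", text)

def pre_call_hook(data: dict, key_permissions: dict) -> dict:
    # Request-level control: a call may turn masking off only if the
    # virtual key's permissions allow PII controls.
    opt_out = data.get("no_pii") and key_permissions.get("allow_pii_controls")
    if not opt_out:
        data["prompt"] = mask_pii(data["prompt"])
    return data

out = pre_call_hook({"prompt": "email me at jane@example.com"}, {})
```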
Krrish Dholakia
b9393fb769 fix(test_parallel_request_limiter.py): use mock responses for streaming 2024-02-08 21:45:38 -08:00
ishaan-jaff
13fe72d6d5 (fix) parallel_request_limiter debug 2024-02-06 12:43:28 -08:00
Krrish Dholakia
92058cbcd4 fix(utils.py): override default success callbacks with dynamic callbacks if set 2024-02-02 06:21:43 -08:00
Krrish Dholakia
f9acad87dc feat(proxy_server.py): enable cache controls per key + no-store cache flag 2024-01-30 20:46:50 -08:00
Krrish Dholakia
bbe71c8375 fix(test_parallel_request_limiter): increase time limit for waiting for success logging event to happen 2024-01-30 13:26:17 -08:00
Krrish Dholakia
f05aba1f85 fix(utils.py): add metadata to logging obj on setup, if exists 2024-01-19 17:29:47 -08:00
Krrish Dholakia
1a29272b47 fix(parallel_request_limiter.py): handle tpm/rpm limits being null 2024-01-19 10:22:27 -08:00
Krrish Dholakia
5dac2402ef test(test_parallel_request_limiter.py): unit testing for tpm/rpm rate limits 2024-01-18 15:28:28 -08:00
Krrish Dholakia
aef59c554f feat(parallel_request_limiter.py): add support for tpm/rpm limits 2024-01-18 13:52:15 -08:00
Krrish Dholakia
1ea3833ef7 fix(parallel_request_limiter.py): decrement count for failed llm calls
https://github.com/BerriAI/litellm/issues/1477
2024-01-18 12:42:14 -08:00
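The limiter commits above (enforce tpm/rpm limits; decrement the count for failed llm calls) can be sketched as a counter that is incremented pre-call and refunded on failure. A toy in-memory version; the real limiter works against a shared cache:

```python
class ParallelRequestLimiter:
    """Toy in-memory rpm limiter keyed by virtual key."""
    def __init__(self, rpm_limit):
        self.rpm_limit = rpm_limit
        self.current = {}  # key -> in-window request count

    def pre_call(self, key):
        count = self.current.get(key, 0)
        if count >= self.rpm_limit:
            raise RuntimeError("rpm limit reached")
        self.current[key] = count + 1

    def on_failure(self, key):
        # A failed call shouldn't consume quota: decrement the counter,
        # otherwise errors would eat into the caller's rate limit.
        self.current[key] = max(0, self.current.get(key, 0) - 1)

limiter = ParallelRequestLimiter(rpm_limit=2)
limiter.pre_call("key-a")
limiter.pre_call("key-a")
limiter.on_failure("key-a")
limiter.pre_call("key-a")  # allowed again after the failure refund
```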
Krrish Dholakia
4905929de3 refactor: add black formatting 2023-12-25 14:11:20 +05:30
Krrish Dholakia
9f79f75635 fix(proxy/utils.py): return different exceptions if key is invalid vs. expired
https://github.com/BerriAI/litellm/issues/1230
2023-12-25 10:29:44 +05:30
Krrish Dholakia
402b2e5733 build(test_streaming.py): fix linting issues 2023-12-25 07:34:54 +05:30
Krrish Dholakia
89ee9fe400 fix(proxy_server.py): manage budget at user-level not key-level
https://github.com/BerriAI/litellm/issues/1220
2023-12-22 15:10:38 +05:30
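Moving budget enforcement from key level to user level, as in the commit above, means spend is aggregated per user so rotating API keys cannot bypass the cap. A minimal sketch under that assumption (names hypothetical):

```python
class BudgetTracker:
    """Toy user-level budget check: spend accrues per user, not per key."""
    def __init__(self, max_budget: float):
        self.max_budget = max_budget
        self.spend = {}  # user_id -> cumulative dollars

    def can_spend(self, user_id: str) -> bool:
        return self.spend.get(user_id, 0.0) < self.max_budget

    def record(self, user_id: str, cost: float) -> None:
        self.spend[user_id] = self.spend.get(user_id, 0.0) + cost

tracker = BudgetTracker(max_budget=10.0)
tracker.record("user-1", 9.5)   # e.g. via key A
ok_before = tracker.can_spend("user-1")
tracker.record("user-1", 1.0)   # e.g. via key B, same user
ok_after = tracker.can_spend("user-1")
```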
Krrish Dholakia
1a32228da5 feat(proxy_server.py): support max budget on proxy 2023-12-21 16:07:20 +05:30
Krrish Dholakia
4791dda66f feat(proxy_server.py): enable infinite retries on rate limited requests 2023-12-15 20:03:41 -08:00
Krrish Dholakia
effdddc1c8 fix(custom_logger.py): enable pre_call hooks to modify incoming data to proxy 2023-12-13 16:20:37 -08:00
Krrish Dholakia
6ef0e8485e fix(proxy_server.py): support for streaming 2023-12-09 16:23:04 -08:00
Krrish Dholakia
5fa2b6e5ad fix(proxy_server.py): enable pre+post-call hooks and max parallel request limits 2023-12-08 17:11:30 -08:00