litellm-mirror/litellm/proxy/hooks
Ishaan Jaff f4613a100d [Perf Proxy] parallel request limiter - use one cache update call (#5932)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
2024-09-27 17:24:46 -07:00
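
The headline change in the commit above batches the limiter's per-request counter writes (request count, user TPM/RPM) into a single pipelined cache update via `async_set_cache_pipeline`, instead of issuing one cache call per key. The sketch below is only a minimal illustration of that pattern: the `ToyAsyncCache` class, its method signatures, the key names, and the `update_request_limits` helper are assumptions made for this example, not LiteLLM's actual implementation.

```python
# Illustrative sketch (hypothetical names): batching several counter updates
# into one pipelined cache write instead of one call per key.
import asyncio
from typing import Any, Dict, List, Tuple


class ToyAsyncCache:
    """Stand-in for an async cache client (hypothetical, for illustration)."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}

    async def async_set_cache(self, key: str, value: Any) -> None:
        # One call per key -> one round-trip per key against a remote cache.
        self._store[key] = value

    async def async_set_cache_pipeline(self, cache_list: List[Tuple[str, Any]]) -> None:
        # Batched update: every (key, value) pair written in a single call,
        # i.e. one pipelined round-trip when backed by something like Redis.
        for key, value in cache_list:
            self._store[key] = value


async def update_request_limits(cache: ToyAsyncCache, user_id: str, tokens_used: int) -> None:
    # Instead of awaiting three separate async_set_cache calls, collect the
    # limiter's counter updates and flush them with one pipelined write.
    updates: List[Tuple[str, Any]] = [
        (f"{user_id}::request_count", 1),
        (f"{user_id}::tpm", tokens_used),
        (f"{user_id}::rpm", 1),
    ]
    await cache.async_set_cache_pipeline(updates)


if __name__ == "__main__":
    asyncio.run(update_request_limits(ToyAsyncCache(), "user-123", tokens_used=250))
```

With a Redis-backed cache, the pipeline method would flush all writes in one round-trip, which is where the latency win over per-key `await` calls comes from.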
File | Last commit | Date
__init__.py | fix(proxy_server.py): enable pre+post-call hooks and max parallel request limits | 2023-12-08 17:11:30 -08:00
azure_content_safety.py | feat - allow accessing data post success call | 2024-08-19 11:35:33 -07:00
batch_redis_get.py | refactor: replace 'traceback.print_exc()' with logging library | 2024-06-06 13:47:43 -07:00
cache_control_check.py | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
dynamic_rate_limiter.py | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00
example_presidio_ad_hoc_recognizer.json | fix(presidio_pii_masking.py): enable user to pass ad hoc recognizer for pii masking | 2024-02-20 16:01:15 -08:00
max_budget_limiter.py | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
parallel_request_limiter.py | [Perf Proxy] parallel request limiter - use one cache update call (#5932) | 2024-09-27 17:24:46 -07:00
presidio_pii_masking.py | feat - allow accessing data post success call | 2024-08-19 11:35:33 -07:00
prompt_injection_detection.py | LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) | 2024-09-17 08:05:52 -07:00