Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-25 18:54:30 +00:00
* fix parallel request limiter: use one cache update call (see the sketch below)
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* only check user id tpm / rpm limits when limits are set
* fix test_openai_azure_embedding_with_oidc_and_cf
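The headline change, collapsing the parallel request limiter's separate cache writes into one batched update, can be sketched roughly as follows. This is a minimal illustration assuming a Redis-backed cache accessed through redis.asyncio; only the name `async_set_cache_pipeline` comes from the commit message itself, while the key layout and the `update_usage` helper are hypothetical and not LiteLLM's actual internals.

```python
# Minimal sketch (not LiteLLM's code): batch several rate-limit counter
# writes into a single Redis round trip instead of one awaited call each.
import redis.asyncio as redis


async def async_set_cache_pipeline(
    client: redis.Redis, updates: dict[str, int], ttl: int = 60
) -> None:
    """Queue all key/value writes and execute them as one pipeline call."""
    pipe = client.pipeline(transaction=False)
    for key, value in updates.items():
        pipe.set(key, value, ex=ttl)
    await pipe.execute()  # one network round trip for every queued SET


async def update_usage(
    client: redis.Redis,
    api_key: str,
    user_id: str | None,
    user_tpm_limit: int | None,
    tokens_used: int,
) -> None:
    # Hypothetical key layout, for illustration only.
    updates = {f"request_count::{api_key}": tokens_used}
    # Mirror the "only check user id tpm / rpm limits when limits set" fix:
    # skip the per-user counter entirely when no user limit is configured.
    if user_id is not None and user_tpm_limit is not None:
        updates[f"request_count::{user_id}"] = tokens_used
    await async_set_cache_pipeline(client, updates)
```

The point of the batching is that several independent awaited cache writes per request become a single pipeline execution, which cuts network round trips when many requests hit the limiter concurrently.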
__init__.py
azure_content_safety.py
batch_redis_get.py
cache_control_check.py
dynamic_rate_limiter.py
example_presidio_ad_hoc_recognizer.json
max_budget_limiter.py
parallel_request_limiter.py
presidio_pii_masking.py
prompt_injection_detection.py