Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 19:24:27 +00:00)
* fix(parallel_request_limiter.py): add back parallel request information to max parallel request limiter. Resolves https://github.com/BerriAI/litellm/issues/8392
* test: mark flaky test to handle time-based tracking issues
* feat(model_management_endpoints.py): expose new PATCH `/model/{model_id}/update` endpoint. Allows updating specific values of a model in the db; calling it a PATCH makes the partial-update semantics clear to admins
* feat(edit_model_modal.tsx): allow user to update llm provider + api key on the ui
* fix: fix linting error
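Since the new endpoint is a PATCH, only the fields supplied in the request body are meant to change. A minimal sketch of constructing such a call (the base URL, model id, auth header, and payload field names are illustrative assumptions; only the `/model/{model_id}/update` route comes from the commit message):

```python
# Hypothetical sketch of calling the PATCH /model/{model_id}/update endpoint.
# Base URL, model id, and payload field names are assumptions, not confirmed API details.

def build_model_patch_request(base_url: str, model_id: str, updates: dict):
    """Build the (url, body) pair for a partial model update.

    Only the fields present in `updates` are intended to change;
    everything else on the model record is left untouched (PATCH semantics).
    """
    url = f"{base_url.rstrip('/')}/model/{model_id}/update"
    return url, updates


url, body = build_model_patch_request(
    "http://localhost:4000",                        # assumed local proxy address
    "model-123",                                    # placeholder model id
    {"litellm_params": {"api_key": "sk-new-key"}},  # only the fields to change
)
# To actually send it, something like:
#   requests.patch(url, json=body, headers={"Authorization": "Bearer <admin-key>"})
```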
Files in this directory:

- __init__.py
- azure_content_safety.py
- batch_redis_get.py
- cache_control_check.py
- dynamic_rate_limiter.py
- example_presidio_ad_hoc_recognizer.json
- key_management_event_hooks.py
- max_budget_limiter.py
- model_max_budget_limiter.py
- parallel_request_limiter.py
- prompt_injection_detection.py
- proxy_failure_handler.py
- proxy_track_cost_callback.py