Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 19:24:27 +00:00)
Latest commit:

* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits

Closes https://github.com/BerriAI/litellm/issues/4096
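
The feat entry above touches litellm's `CustomLogger` interface. Below is a minimal sketch of how a custom callback class is subclassed and registered; the `log_failure_event` hook shown is part of the existing `CustomLogger` API, while the rate-limit-specific method name `log_rate_limit_hit` and its signature are assumptions — the actual hook added by this commit is not visible from this listing.

```python
import litellm
from litellm.integrations.custom_logger import CustomLogger


class RateLimitAlertLogger(CustomLogger):
    def log_failure_event(self, kwargs, response_obj, start_time, end_time):
        # Existing CustomLogger failure hook; a rate-limit error surfaces
        # here as the exception litellm attaches to kwargs.
        exception = kwargs.get("exception")
        print(f"request failed: {exception}")

    def log_rate_limit_hit(self, deployment, tpm_used, rpm_used):
        # Hypothetical limit-specific hook (name and signature are
        # assumptions, not the method actually added in custom_logger.py).
        print(f"{deployment} hit its limit: tpm={tpm_used}, rpm={rpm_used}")


# Register the logger so litellm invokes its hooks on each request.
litellm.callbacks = [RateLimitAlertLogger()]
```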
Files in this directory:

llm_cost_calc/
asyncify.py
core_helpers.py
exception_mapping_utils.py
json_validation_rule.py
litellm_logging.py
llm_request_utils.py
logging_utils.py
redact_messages.py
streaming_utils.py
token_counter.py
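
The names alone do not document these modules. As one hedged illustration, token_counter.py presumably backs litellm's public `litellm.token_counter` helper, used roughly as follows:

```python
import litellm

messages = [{"role": "user", "content": "Hello, how are you?"}]

# Count the tokens these messages consume for the given model.
num_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages)
print(num_tokens)
```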