litellm-mirror/litellm
Latest commit: 2023-12-27 15:20:26 +05:30
Name                                          Last commit message                                                                         Date
deprecated_litellm_server                     refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
integrations                                  refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
llms                                          fix(azure.py,-openai.py): correctly raise errors if streaming calls fail                    2023-12-27 15:08:37 +05:30
proxy                                         fix(google_kms.py): support enums for key management system                                 2023-12-27 13:19:33 +05:30
router_strategy                               refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
tests                                         (test) fix langfuse test                                                                    2023-12-27 15:20:26 +05:30
__init__.py                                   fix(google_kms.py): support enums for key management system                                 2023-12-27 13:19:33 +05:30
_logging.py                                   refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
_redis.py                                     refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
_version.py                                   formatting improvements                                                                     2023-08-28 09:20:50 -07:00
budget_manager.py                             refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
caching.py                                    refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
cost.json                                     store llm costs in budget manager                                                           2023-09-09 19:11:35 -07:00
exceptions.py                                 refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
main.py                                       fix(azure.py,-openai.py): correctly raise errors if streaming calls fail                    2023-12-27 15:08:37 +05:30
model_prices_and_context_window_backup.json   (fix) update back model prices with latest llms                                             2023-12-11 10:56:01 -08:00
requirements.txt                              Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas.   2023-11-22 23:07:33 -05:00
router.py                                     feat(proxy_server.py): support maxage cache control                                         2023-12-26 17:50:27 +05:30
timeout.py                                    refactor: add black formatting                                                              2023-12-25 14:11:20 +05:30
utils.py                                      fix(azure.py,-openai.py): correctly raise errors if streaming calls fail                    2023-12-27 15:08:37 +05:30