litellm/litellm (last updated 2023-12-30 10:55:42 +05:30)
| Name | Last commit | Date |
| --- | --- | --- |
| `deprecated_litellm_server/` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `integrations/` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `llms/` | (fix) vertex ai - use usage from response | 2023-12-29 16:30:25 +05:30 |
| `proxy/` | feat(admin_ui.py): support creating keys on admin ui | 2023-12-28 16:59:11 +05:30 |
| `router_strategy/` | fix(router.py): handle initial scenario for tpm/rpm routing | 2023-12-30 07:28:45 +05:30 |
| `tests/` | fix(router.py): handle initial scenario for tpm/rpm routing | 2023-12-30 07:28:45 +05:30 |
| `__init__.py` | (fix) use openai token counter for azure llms | 2023-12-29 15:37:46 +05:30 |
| `_logging.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_redis.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `caching.py` | fix(caching.py): hash the cache key to prevent key too long errors | 2023-12-29 15:03:33 +05:30 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `main.py` | (feat) proxy - support dynamic timeout per request | 2023-12-30 10:55:42 +05:30 |
| `model_prices_and_context_window_backup.json` | (fix) update back model prices with latest llms | 2023-12-11 10:56:01 -08:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | (feat) router, add ModelResponse type hints | 2023-12-30 10:44:13 +05:30 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | (feat) proxy - support dynamic timeout per request | 2023-12-30 10:55:42 +05:30 |
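The `caching.py` entry above refers to hashing cache keys so they never exceed a backend's key-length limit. Below is a minimal sketch of that general technique only; it is not litellm's actual implementation, and the `make_cache_key` helper and the 64-character cutoff are assumptions made for illustration.

```python
import hashlib

def make_cache_key(raw_key: str, max_len: int = 64) -> str:
    """Return a key safe for cache backends that cap key length.

    Short keys pass through unchanged; longer keys are replaced by their
    SHA-256 hex digest, which is always 64 characters.
    (Illustrative sketch, not litellm's code.)
    """
    if len(raw_key) <= max_len:
        return raw_key
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()

# Example: a prompt-derived key that would exceed typical backend limits.
long_key = "completion:" + "x" * 500
print(make_cache_key(long_key))  # prints a fixed-length 64-char hex digest
```

Hashing trades human-readable keys for a bounded length, which avoids "key too long" errors at the cost of not being able to reverse a stored key back into the original prompt.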