litellm/litellm — `litellm/` directory
Latest commit: 2024-01-02 12:10:34 +05:30
| Name | Last commit message | Date |
| --- | --- | --- |
| `deprecated_litellm_server/` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `integrations/` | (fix) proxy - remove errant print statement | 2024-01-01 10:48:12 +05:30 |
| `llms/` | (fix) init_bedrock_client | 2024-01-01 22:48:56 +05:30 |
| `proxy/` | (feat) proxy - use user_config for /chat/compeltions | 2024-01-02 12:10:34 +05:30 |
| `router_strategy/` | feat(router.py): add support for retry/fallbacks for async embedding calls | 2024-01-02 11:54:28 +05:30 |
| `tests/` | (test) proxy - use, user provided model_list | 2024-01-02 12:10:34 +05:30 |
| `__init__.py` | update azure turbo namings | 2024-01-01 13:03:08 +03:00 |
| `_logging.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_redis.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `caching.py` | (docs) add litellm.cache docstring | 2023-12-30 20:04:08 +05:30 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `main.py` | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30 |
| `model_prices_and_context_window_backup.json` | (fix) update back model prices with latest llms | 2023-12-11 10:56:01 -08:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | feat(router.py): add support for retry/fallbacks for async embedding calls | 2024-01-02 11:54:28 +05:30 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | fix(router.py): correctly raise no model available error | 2024-01-01 21:22:42 +05:30 |