litellm/litellm (latest commit: 2024-01-01 11:54:16 +05:30)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `deprecated_litellm_server/` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `integrations/` | (fix) proxy - remove errant print statement | 2024-01-01 10:48:12 +05:30 |
| `llms/` | fix(aimage_generation): fix response type | 2023-12-30 12:53:24 +05:30 |
| `proxy/` | (test) proxy - log metadata to langfuse | 2024-01-01 11:54:16 +05:30 |
| `router_strategy/` | test(test_lowest_latency_routing.py): add more tests | 2023-12-30 17:41:42 +05:30 |
| `tests/` | (test) langfuse - set custom trace_id | 2023-12-30 20:19:22 +05:30 |
| `__init__.py` | (fix) use openai token counter for azure llms | 2023-12-29 15:37:46 +05:30 |
| `_logging.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_redis.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `caching.py` | (docs) add litellm.cache docstring | 2023-12-30 20:04:08 +05:30 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `main.py` | (feat) cache context manager - update cache | 2023-12-30 19:50:53 +05:30 |
| `model_prices_and_context_window_backup.json` | (fix) update back model prices with latest llms | 2023-12-11 10:56:01 -08:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | test(test_lowest_latency_routing.py): add more tests | 2023-12-30 17:41:42 +05:30 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | (fix) use cloudflare optional params | 2023-12-30 12:22:31 +05:30 |