litellm-mirror/litellm
Latest commit: 2024-01-09 13:00:23 +05:30
Name                                         Last commit message                                                                       Date
deprecated_litellm_server                    refactor: add black formatting                                                            2023-12-25 14:11:20 +05:30
integrations                                 (fix) proxy - remove errant print statement                                               2024-01-01 10:48:12 +05:30
llms                                         fix(openai.py): fix exception raising logic                                               2024-01-09 12:59:57 +05:30
proxy                                        fix(proxy_server.py): don't reconnect prisma if already connected                         2024-01-09 12:59:57 +05:30
router_strategy                              refactor(lowest_latency.py): fix linting issue                                            2024-01-09 12:59:42 +05:30
tests                                        (temp) prisma client init logic                                                           2024-01-09 13:00:23 +05:30
types                                        (types) routerConfig                                                                      2024-01-02 14:14:29 +05:30
__init__.py                                  (feat) completion_cost - embeddings + raise Exception                                     2024-01-05 13:11:23 +05:30
_logging.py                                  (fix) proxy - show detailed_debug logs                                                    2024-01-08 15:34:24 +05:30
_redis.py                                    refactor: add black formatting                                                            2023-12-25 14:11:20 +05:30
_version.py                                  formatting improvements                                                                   2023-08-28 09:20:50 -07:00
budget_manager.py                            refactor: add black formatting                                                            2023-12-25 14:11:20 +05:30
caching.py                                   Merge pull request #1311 from Manouchehri/patch-5                                         2024-01-08 09:47:57 +05:30
cost.json                                    store llm costs in budget manager                                                         2023-09-09 19:11:35 -07:00
exceptions.py                                refactor: add black formatting                                                            2023-12-25 14:11:20 +05:30
main.py                                      fix(main.py): support cost calculation for text completion streaming object               2024-01-08 12:41:43 +05:30
model_prices_and_context_window_backup.json  (fix) update back model prices with latest llms                                           2023-12-11 10:56:01 -08:00
requirements.txt                             Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. 2023-11-22 23:07:33 -05:00
router.py                                    fix(router.py): azure client init fix                                                     2024-01-08 14:56:57 +05:30
timeout.py                                   refactor: add black formatting                                                            2023-12-25 14:11:20 +05:30
utils.py                                     (feat) litellm.completion - support ollama timeout                                        2024-01-09 12:59:42 +05:30