litellm-mirror/litellm (last updated 2025-03-20 09:55:59 -07:00)
Name | Last commit message | Last commit date
assistants | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00
batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches | refactor(batches/main.py): working refactored azure client init on batches | 2025-03-11 14:36:38 -07:00
caching | Merge pull request #9330 from BerriAI/litellm_dev_03_17_2025_p1 | 2025-03-17 19:57:25 -07:00
files | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00
fine_tuning | fix linting | 2025-02-14 21:42:51 -08:00
integrations | define CustomPromptManagement | 2025-03-19 16:22:23 -07:00
litellm_core_utils | get_chat_completion_prompt | 2025-03-19 20:50:15 -07:00
llms | add fake_stream to llm http handler | 2025-03-20 09:55:59 -07:00
proxy | Merge pull request #9395 from BerriAI/litellm_ui_fixes_03_19_2025 | 2025-03-19 22:58:32 -07:00
realtime_api | fix(aim.py): fix linting error | 2025-03-13 15:32:42 -07:00
rerank_api | Add new gpt-4.5-preview model + other updates (#8879) | 2025-02-27 15:27:14 -08:00
responses | add fake_stream to llm http handler | 2025-03-20 09:55:59 -07:00
router_strategy | Revert "Fix latency redis" | 2025-03-19 18:11:22 -07:00
router_utils | test_openai_responses_litellm_router | 2025-03-12 16:13:48 -07:00
secret_managers | fix if | 2025-03-11 09:27:31 +00:00
types | Merge branch 'main' into litellm_arize_dynamic_logging | 2025-03-18 22:13:35 -07:00
__init__.py | Merge branch 'main' of https://github.com/SunnyWan59/litellm | 2025-03-13 19:42:25 -04:00
_logging.py | (sdk perf fix) - only print args passed to litellm when debugging mode is on (#7708) | 2025-01-11 22:56:20 -08:00
_redis.py | fix(redis_cache.py): add 5s default timeout | 2025-03-17 14:27:36 -07:00
_service_logger.py | fix svc logger (#7727) | 2025-01-12 22:00:25 -08:00
_version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
constants.py | fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis | 2025-03-18 19:09:53 -07:00
cost.json | |
cost_calculator.py | feat(cost_calculator.py): support reading litellm response cost header in client sdk | 2025-03-17 15:12:01 -07:00
exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00
main.py | fix type errors on transcription azure | 2025-03-18 14:22:30 -07:00
model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): fix native streaming flag | 2025-03-19 19:53:19 -07:00
py.typed | |
router.py | feat(endpoints.py): support adding credentials by model id | 2025-03-14 12:32:32 -07:00
scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | add should_fake_stream | 2025-03-20 09:54:26 -07:00