litellm-mirror/litellm (latest commit: 2024-05-28 22:27:09 -07:00)
Name | Last commit message | Last commit date
assistants | feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages() | 2024-05-04 21:30:28 -07:00
batches | fix python3.8 error | 2024-05-28 17:25:08 -07:00
deprecated_litellm_server | |
integrations | fix - validation for email alerting | 2024-05-27 22:38:17 -07:00
llms | Merge pull request #3882 from BerriAI/litellm_add_batches_sdk | 2024-05-28 19:38:12 -07:00
proxy | feat(proxy_server.py): give request-level breakdown if ttft metric is selected for ju | 2024-05-28 18:09:22 -07:00
router_strategy | fix(lowest_latency.py): set default none value for time_to_first_token in sync log success event | 2024-05-21 18:42:15 -07:00
tests | feat - router add abatch_completion | 2024-05-28 22:19:33 -07:00
types | fix python 3.8 error | 2024-05-28 17:21:59 -07:00
__init__.py | feat - import batches in __init__ | 2024-05-28 15:35:11 -07:00
_logging.py | fix(_logging.py): support all logs being in json mode, if enabled | 2024-05-20 09:22:59 -07:00
_redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00
_service_logger.py | fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 | 2024-04-18 21:38:00 -07:00
_version.py | |
budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00
caching.py | fix(proxy_server.py): allow user_api_key_cache_ttl to be a controllable param | 2024-05-25 12:07:28 -07:00
cost.json | |
exceptions.py | feat(proxy_server.py): refactor returning rejected message, to work with error logging | 2024-05-20 11:14:36 -07:00
main.py | fix(main.py): pass extra headers through for async calls | 2024-05-27 19:11:40 -07:00
model_prices_and_context_window_backup.json | feat(ui/model_dashboard.tsx): add databricks models via admin ui | 2024-05-23 20:28:54 -07:00
requirements.txt | |
router.py | fix - update abatch_completion docstring | 2024-05-28 22:27:09 -07:00
timeout.py | |
utils.py | Merge branch 'main' into litellm_show_openai_params_model_hub | 2024-05-27 09:27:56 -07:00
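For orientation, the entries above cover the core SDK surface: completion calls in main.py, routing in router.py (which the recent commits extend with abatch_completion), plus the newer batches and assistants modules. Below is a minimal, hedged sketch of how the first two are typically invoked. litellm.completion and Router are documented public APIs; the abatch_completion parameters shown (models=, messages=) are an assumption inferred only from the commit messages in this listing, and a provider credential such as OPENAI_API_KEY is assumed to be set in the environment.

```python
# Sketch of the SDK entry points referenced in the commit log above.
# Assumes OPENAI_API_KEY is set; abatch_completion's signature is an assumption.
import asyncio

import litellm
from litellm import Router

# Plain completion call (litellm/main.py).
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

# Router with a single deployment (litellm/router.py).
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},
        }
    ]
)

async def main() -> None:
    # abatch_completion was added in the commits listed above; the exact
    # parameters (models=..., messages=...) are assumed, not confirmed here.
    results = await router.abatch_completion(
        models=["gpt-3.5-turbo"],
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(results)

asyncio.run(main())
```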