`litellm-mirror/litellm` — directory listing (latest commit: 2024-06-03 23:32:19 -07:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| assistants | fix(main.py): cast to string only if var is not None | 2024-06-03 19:25:59 -07:00 |
| batches | docs(customers.md): add customer cost tracking to docs | 2024-05-29 14:55:33 -07:00 |
| deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| integrations | fix(langfuse.py): log litellm response cost as part of langfuse metadata | 2024-06-03 12:58:30 -07:00 |
| llms | Merge pull request #3996 from BerriAI/litellm_azure_assistants_api_support | 2024-06-03 21:05:03 -07:00 |
| proxy | ui - new build | 2024-06-03 21:15:00 -07:00 |
| router_strategy | fix(lowest_latency.py): set default none value for time_to_first_token in sync log success event | 2024-05-21 18:42:15 -07:00 |
| tests | test(test_image_generation.py): fix azure dall e test | 2024-06-03 23:32:19 -07:00 |
| types | Merge pull request #3996 from BerriAI/litellm_azure_assistants_api_support | 2024-06-03 21:05:03 -07:00 |
| __init__.py | Merge pull request #3996 from BerriAI/litellm_azure_assistants_api_support | 2024-06-03 21:05:03 -07:00 |
| _logging.py | fix(proxy_cli.py): enable json logging via litellm_settings param on config | 2024-05-29 21:41:20 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 | 2024-04-18 21:38:00 -07:00 |
| _version.py | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | fix(proxy_server.py): allow user_api_key_cache_ttl to be a controllable param | 2024-05-25 12:07:28 -07:00 |
| cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| exceptions.py | feat - add num retries and max retries in exception | 2024-06-01 16:53:00 -07:00 |
| main.py | fix(main.py): fix ahealth_check to infer mode when custom_llm_provider/model_name used | 2024-06-03 14:06:36 -07:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): add azure gpt-4o pricing | 2024-05-31 15:44:19 -07:00 |
| requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| router.py | Merge pull request #3992 from BerriAI/litellm_router_default_request_timeout | 2024-06-03 21:37:38 -07:00 |
| scheduler.py | fix(test_scheduler.py): simplify scheduler testing. fix race condition | 2024-06-01 18:57:47 -07:00 |
| timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| utils.py | fix(router.py): use litellm.request_timeout as default for router clients | 2024-06-03 14:19:53 -07:00 |