litellm-mirror/litellm
Latest commit: 2024-05-23 13:08:06 -07:00

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `assistants` | feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages() | 2024-05-04 21:30:28 -07:00 |
| `deprecated_litellm_server` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `integrations` | fix(slack_alerting.py): fix time check + add more debug values | 2024-05-22 20:11:36 -07:00 |
| `llms` | feat(anthropic.py): support anthropic 'tool_choice' param | 2024-05-21 17:50:44 -07:00 |
| `proxy` | feat - add open ai moderations check | 2024-05-23 13:08:06 -07:00 |
| `router_strategy` | fix(lowest_latency.py): set default none value for time_to_first_token in sync log success event | 2024-05-21 18:42:15 -07:00 |
| `tests` | test(test_key_generate_prisma.py): fix tests with unique team id | 2024-05-23 08:46:44 -07:00 |
| `types` | feat(anthropic.py): support anthropic 'tool_choice' param | 2024-05-21 17:50:44 -07:00 |
| `__init__.py` | feat - add open ai moderations check | 2024-05-23 13:08:06 -07:00 |
| `_logging.py` | fix(_logging.py): support all logs being in json mode, if enabled | 2024-05-20 09:22:59 -07:00 |
| `_redis.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `_service_logger.py` | fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 | 2024-04-18 21:38:00 -07:00 |
| `_version.py` | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| `budget_manager.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `caching.py` | Merge pull request #3266 from antonioloison/litellm_add_disk_cache | 2024-05-14 09:24:01 -07:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | feat(proxy_server.py): refactor returning rejected message, to work with error logging | 2024-05-20 11:14:36 -07:00 |
| `main.py` | feat(anthropic.py): support anthropic 'tool_choice' param | 2024-05-21 17:50:44 -07:00 |
| `model_prices_and_context_window_backup.json` | build(model_prices_and_context_window.json): update azure/gpt-3.5-turbo base model pricing | 2024-05-21 10:58:16 -07:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769 | 2024-05-21 17:24:51 -07:00 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | Merge branch 'main' into litellm_filter_invalid_params | 2024-05-21 20:42:21 -07:00 |