litellm-mirror/litellm (latest commit: 2024-06-28 15:06:51 -07:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| assistants/ | feat(assistants/main.py): support arun_thread_stream | 2024-06-04 16:47:51 -07:00 |
| batches/ | docs(customers.md): add customer cost tracking to docs | 2024-05-29 14:55:33 -07:00 |
| deprecated_litellm_server/ | | |
| integrations/ | fix(router.py): set cooldown_time: per model | 2024-06-27 20:20:46 -07:00 |
| litellm_core_utils/ | fix(router.py): set cooldown_time: per model | 2024-06-27 20:20:46 -07:00 |
| llms/ | fix(vertex_httpx.py): only use credential project id, if user project id not given | 2024-06-27 22:08:14 -07:00 |
| proxy/ | fix support pass through endpoints | 2024-06-28 15:06:51 -07:00 |
| router_strategy/ | refactor: replace 'traceback.print_exc()' with logging library | 2024-06-06 13:47:43 -07:00 |
| router_utils/ | fix use safe access for router alerting | 2024-06-14 15:17:32 -07:00 |
| tests/ | Merge pull request #4446 from BerriAI/litellm_get_max_modified_tokens | 2024-06-27 21:43:23 -07:00 |
| types/ | fix(utils.py): handle arguments being None | 2024-06-27 20:20:46 -07:00 |
| __init__.py | add initial support for volcengine | 2024-06-27 20:20:46 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | feat(dynamic_rate_limiter.py): update cache with active project | 2024-06-21 20:25:40 -07:00 |
| _version.py | | |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | remove debug print statement | 2024-06-27 20:58:29 -07:00 |
| cost.json | | |
| cost_calculator.py | Merge branch 'main' into litellm_response_cost_headers | 2024-06-27 21:33:09 -07:00 |
| exceptions.py | fix(utils.py): fix exception_mapping check for errors | 2024-06-27 20:20:29 -07:00 |
| main.py | Merge pull request #4449 from BerriAI/litellm_azure_tts | 2024-06-27 21:33:38 -07:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): update gemini-1.5-pro max input tokens | 2024-06-27 21:58:54 -07:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | | |
| router.py | fix(router.py): set cooldown_time: per model | 2024-06-27 20:20:46 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | | |
| utils.py | Merge pull request #4446 from BerriAI/litellm_get_max_modified_tokens | 2024-06-27 21:43:23 -07:00 |
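The listing above is the top level of the litellm Python package: main.py carries the OpenAI-compatible completion entry points, router.py the load-balancing Router class, and proxy/ the proxy server. As a minimal sketch of litellm's documented public API (the model name is a placeholder, not taken from this listing, and an OpenAI API key is assumed to be set in the environment):

```python
# Minimal usage sketch against litellm's documented public API.
# "gpt-3.5-turbo" is a placeholder model name; assumes OPENAI_API_KEY is set.
import litellm
from litellm import Router

# main.py exposes completion(), an OpenAI-style chat completion call.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

# router.py exposes Router, which load-balances requests across the
# deployments declared in model_list (a single deployment here, for brevity).
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",                 # alias callers use
            "litellm_params": {"model": "gpt-3.5-turbo"},  # actual deployment
        }
    ]
)
resp = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```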