litellm-mirror/litellm
Latest commit: 2024-07-30 18:38:10 -07:00
Name | Last commit message | Last commit date
adapters/ | feat(proxy_server.py): working /v1/messages endpoint | 2024-07-10 18:15:38 -07:00
assistants/ | add async assistants delete support | 2024-07-10 11:14:40 -07:00
batches/ | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00
deprecated_litellm_server/ | |
files/ | fix(files/main.py): fix linting error | 2024-07-19 15:50:25 -07:00
fine_tuning/ | fix type errors | 2024-07-29 20:10:03 -07:00
integrations/ | log output from /audio on langfuse | 2024-07-29 08:21:22 -07:00
litellm_core_utils/ | fix(utils.py): fix cost tracking for vertex ai partner models | 2024-07-30 14:20:52 -07:00
llms/ | Merge branch 'main' into litellm_async_cohere_calls | 2024-07-30 15:35:20 -07:00
proxy/ | fix(utils.py): fix model registeration to model cost map | 2024-07-30 18:15:00 -07:00
router_strategy/ | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00
router_utils/ | Revert "[Ui] add together AI, Mistral, PerplexityAI, OpenRouter models on Admin UI " | 2024-07-20 19:04:22 -07:00
tests/ | fix(utils.py): fix model registeration to model cost map | 2024-07-30 18:15:00 -07:00
types/ | fix(utils.py): fix model registeration to model cost map | 2024-07-30 18:15:00 -07:00
__init__.py | add create_fine_tuning | 2024-07-29 18:57:29 -07:00
_logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00
_redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00
_service_logger.py | use common helpers for writing to otel | 2024-07-27 11:40:39 -07:00
_version.py | |
budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00
caching.py | fix(caching.py): support /completion caching by default | 2024-07-29 08:19:30 -07:00
cost.json | |
cost_calculator.py | fix(utils.py): fix model registeration to model cost map | 2024-07-30 18:15:00 -07:00
exceptions.py | fix(utils.py): correctly re-raise azure api connection error | 2024-07-29 12:28:25 -07:00
main.py | Merge branch 'main' into litellm_async_cohere_calls | 2024-07-30 15:35:20 -07:00
model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): update model info for llama3.1 on bedrock - supports tool calling, not tool choice | 2024-07-29 15:43:16 -07:00
py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00
requirements.txt | |
router.py | fix(utils.py): fix linting errors | 2024-07-30 18:38:10 -07:00
scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00
timeout.py | |
utils.py | fix(utils.py): fix linting errors | 2024-07-30 18:38:10 -07:00