`litellm-mirror/litellm` (latest commit: 2024-07-26 08:59:53 -07:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| adapters | feat(proxy_server.py): working /v1/messages endpoint | 2024-07-10 18:15:38 -07:00 |
| assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| batches | fix(batches/main.py): fix linting error | 2024-07-19 18:26:13 -07:00 |
| deprecated_litellm_server | | |
| files | fix(files/main.py): fix linting error | 2024-07-19 15:50:25 -07:00 |
| integrations | fix logfire - don't load_dotenv | 2024-07-25 19:22:26 -07:00 |
| litellm_core_utils | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking | 2024-07-25 22:12:07 -07:00 |
| llms | fix(vertex_ai_llama3.py): Fix llama3 streaming issue | 2024-07-25 22:30:55 -07:00 |
| proxy | docs(config.md): update wildcard docs | 2024-07-26 08:59:53 -07:00 |
| router_strategy | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00 |
| router_utils | Revert "[Ui] add together AI, Mistral, PerplexityAI, OpenRouter models on Admin UI " | 2024-07-20 19:04:22 -07:00 |
| tests | feat(proxy_server.py): handle pydantic mockselvar error | 2024-07-26 08:38:51 -07:00 |
| types | Merge branch 'main' into litellm_proxy_support_all_providers | 2024-07-25 20:15:37 -07:00 |
| __init__.py | feat(utils.py): support sync streaming for custom llm provider | 2024-07-25 16:47:32 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | fix(_service_logging.py): only trigger otel if in service_callback | 2024-07-03 09:48:38 -07:00 |
| _version.py | | |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | add doc string to explain what delete cache does | 2024-07-13 12:25:31 -07:00 |
| cost.json | | |
| cost_calculator.py | fix(litellm_logging.py): log response_cost=0 for failed calls | 2024-07-15 19:25:56 -07:00 |
| exceptions.py | feat use UnsupportedParamsError as litellm error type | 2024-07-24 12:19:10 -07:00 |
| main.py | fix(custom_llm.py): pass input params to custom llm | 2024-07-25 19:03:52 -07:00 |
| model_prices_and_context_window_backup.json | Merge branch 'main' into bedrock-llama3.1-405b | 2024-07-25 19:29:10 -07:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | | |
| router.py | docs(config.md): update wildcard docs | 2024-07-26 08:59:53 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | | |
| utils.py | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking | 2024-07-25 22:12:07 -07:00 |
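For orientation, here is a minimal sketch of how the two main entry points in this listing are typically used: `main.py` backs the top-level `litellm.completion()` call, and `router.py` provides the `Router` class for spreading requests across a list of deployments. The model name, message contents, and the assumption that an `OPENAI_API_KEY` environment variable is set are all illustrative, not taken from this snapshot.

```python
import litellm
from litellm import Router

# main.py: provider-agnostic completion call; the model string selects
# the underlying provider (OpenAI here, purely as an illustration).
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
)
print(response.choices[0].message.content)

# router.py: Router load-balances completion calls across the
# deployments declared in model_list (a single one in this sketch).
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},
        }
    ]
)
routed = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello again"}],
)
print(routed.choices[0].message.content)
```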