litellm-mirror/litellm

| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| `adapters/` | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 22:52:57 -07:00 |
| `assistants/` | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| `batch_completion/` | Litellm vllm refactor (#7158) | 2024-12-10 21:48:35 -08:00 |
| `batches/` | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| `caching/` | Provider Budget Routing - Get Budget, Spend Details (#7063) | 2024-12-06 21:14:12 -08:00 |
| `deprecated_litellm_server/` | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| `files/` | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| `fine_tuning/` | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| `integrations/` | (Feat) DataDog Logger - Add HOSTNAME and POD_NAME to DataDog logs (#7189) | 2024-12-12 12:06:26 -08:00 |
| `litellm_core_utils/` | fix: Support WebP image format and avoid token calculation error (#7182) | 2024-12-12 14:32:39 -08:00 |
| `llms/` | fix hf failing streaming test | 2024-12-12 10:48:00 -08:00 |
| `proxy/` | bump: version 1.55.0 → 1.55.1 | 2024-12-12 20:50:45 -08:00 |
| `realtime_api/` | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| `rerank_api/` | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| `router_strategy/` | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| `router_utils/` | Litellm dev 12 07 2024 (#7086) | 2024-12-08 00:30:33 -08:00 |
| `secret_managers/` | (Refactor) Code Quality improvement - remove /prompt_templates/, base_aws_llm.py from /llms folder (#7164) | 2024-12-11 00:02:46 -08:00 |
| `types/` | (feat) add error_code, error_class, llm_provider to StandardLoggingPayload (#7200) | 2024-12-12 12:18:10 -08:00 |
| `__init__.py` | ci/cd run release pipeline | 2024-12-12 10:48:47 -08:00 |
| `_logging.py` | LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) | 2024-11-02 00:44:32 +05:30 |
| `_redis.py` | (redis fix) - fix `AbstractConnection.__init__()` got an unexpected keyword argument 'ssl' (#6908) | 2024-11-25 22:52:44 -08:00 |
| `_service_logger.py` | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| `_version.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `budget_manager.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `constants.py` | (minor fix proxy) Clarify Proxy Rate limit errors are showing hash of litellm virtual key (#7210) | 2024-12-12 20:13:14 -08:00 |
| `cost.json` | | |
| `cost_calculator.py` | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| `exceptions.py` | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| `main.py` | fix(acompletion): support fallbacks on acompletion (#7184) | 2024-12-11 19:20:54 -08:00 |
| `model_prices_and_context_window_backup.json` | build(model_prices_and_context_window.json): add new dbrx llama 3.3 model | 2024-12-11 13:01:22 -08:00 |
| `py.typed` | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| `router.py` | (fix) latency fix - revert prompt caching check on litellm router (#7211) | 2024-12-12 20:50:16 -08:00 |
| `scheduler.py` | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| `timeout.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `utils.py` | fix: Support WebP image format and avoid token calculation error (#7182) | 2024-12-12 14:32:39 -08:00 |