| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapters | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 22:52:57 -07:00 |
| assistants | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| batch_completion | Litellm vllm refactor (#7158) | 2024-12-10 21:48:35 -08:00 |
| batches | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| caching | Provider Budget Routing - Get Budget, Spend Details (#7063) | 2024-12-06 21:14:12 -08:00 |
| deprecated_litellm_server | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| files | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| fine_tuning | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| integrations | (Feat) DataDog Logger - Add HOSTNAME and POD_NAME to DataDog logs (#7189) | 2024-12-12 12:06:26 -08:00 |
| litellm_core_utils | Litellm dev 12 13 2024 p1 (#7219) | 2024-12-13 19:01:28 -08:00 |
| llms | Litellm dev 12 13 2024 p1 (#7219) | 2024-12-13 19:01:28 -08:00 |
| proxy | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| realtime_api | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| rerank_api | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| router_strategy | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| router_utils | Litellm dev 12 11 2024 v2 (#7215) | 2024-12-13 12:49:57 -08:00 |
| secret_managers | (Refactor) Code Quality improvement - remove /prompt_templates/, base_aws_llm.py from /llms folder (#7164) | 2024-12-11 00:02:46 -08:00 |
| types | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| __init__.py | Litellm dev 12 12 2024 (#7203) | 2024-12-13 08:54:03 -08:00 |
| _logging.py | LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) | 2024-11-02 00:44:32 +05:30 |
| _redis.py | (redis fix) - fix AbstractConnection.__init__() got an unexpected keyword argument 'ssl' (#6908) | 2024-11-25 22:52:44 -08:00 |
| _service_logger.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| constants.py | Litellm dev 12 12 2024 (#7203) | 2024-12-13 08:54:03 -08:00 |
| cost.json | | |
| cost_calculator.py | Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) | 2024-12-11 00:32:41 -08:00 |
| exceptions.py | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| main.py | Litellm dev 12 12 2024 (#7203) | 2024-12-13 08:54:03 -08:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): add new dbrx llama 3.3 model | 2024-12-11 13:01:22 -08:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| router.py | (feat - Router / Proxy) Allow setting budget limits per LLM deployment (#7220) | 2024-12-13 19:15:51 -08:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | Litellm dev 12 12 2024 (#7203) | 2024-12-13 08:54:03 -08:00 |