| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapters | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 22:52:57 -07:00 |
| assistants | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| batch_completion | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| batches | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| caching | Provider Budget Routing - Get Budget, Spend Details (#7063) | 2024-12-06 21:14:12 -08:00 |
| deprecated_litellm_server | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| files | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| fine_tuning | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| integrations | Litellm code qa common config (#7113) | 2024-12-09 15:58:25 -08:00 |
| litellm_core_utils | refactor(sagemaker/): separate chat + completion routes + make them b… (#7151) | 2024-12-10 19:40:05 -08:00 |
| llms | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| proxy | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| realtime_api | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| rerank_api | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| router_strategy | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| router_utils | Litellm dev 12 07 2024 (#7086) | 2024-12-08 00:30:33 -08:00 |
| secret_managers | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| types | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| __init__.py | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| _logging.py | LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) | 2024-11-02 00:44:32 +05:30 |
| _redis.py | (redis fix) - fix AbstractConnection.__init__() got an unexpected keyword argument 'ssl' (#6908) | 2024-11-25 22:52:44 -08:00 |
| _service_logger.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| constants.py | refactor(fireworks_ai/): inherit from openai like base config (#7146) | 2024-12-10 16:15:19 -08:00 |
| cost.json | | |
| cost_calculator.py | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| exceptions.py | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| main.py | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| model_prices_and_context_window_backup.json | fix llama-3.3-70b-versatile | 2024-12-07 20:19:02 -08:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | | |
| router.py | Litellm dev 12 07 2024 (#7086) | 2024-12-08 00:30:33 -08:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | refactor(sagemaker/): separate chat + completion routes + make them b… (#7151) | 2024-12-10 19:40:05 -08:00 |