Name | Last commit message | Last commit date
adapters/ | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
assistants/ | Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) | 2025-01-03 16:32:12 -08:00
batch_completion/ | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches/ | (Feat - Batches API) add support for retrieving vertex api batch jobs (#7661) | 2025-01-09 18:35:03 -08:00
caching/ | fix 1 - latency fix (#7655) | 2025-01-09 15:57:05 -08:00
files/ | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
fine_tuning/ | (feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490) | 2025-01-01 07:44:48 -08:00
integrations/ | Litellm dev 01 07 2025 p3 (#7635) | 2025-01-08 11:46:24 -08:00
litellm_core_utils/ | fix(main.py): fix lm_studio/ embedding routing (#7658) | 2025-01-09 23:03:24 -08:00
llms/ | (Feat - Batches API) add support for retrieving vertex api batch jobs (#7661) | 2025-01-09 18:35:03 -08:00
proxy/ | fix(main.py): fix lm_studio/ embedding routing (#7658) | 2025-01-09 23:03:24 -08:00
realtime_api/ | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00
rerank_api/ | (feat) /batches - track user_api_key_alias, user_api_key_team_alias etc for /batch requests (#7401) | 2024-12-24 17:44:28 -08:00
router_strategy/ | Litellm dev 12 26 2024 p4 (#7439) | 2024-12-27 12:01:42 -08:00
router_utils/ | (Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI (#7496) | 2025-01-01 14:11:27 -08:00
secret_managers/ | (Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager (#7549) | 2025-01-04 11:35:59 -08:00
types/ | LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643) | 2025-01-08 19:45:19 -08:00
__init__.py | ci/cd run again | 2025-01-08 18:36:39 -08:00
_logging.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
_redis.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
_service_logger.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00
_version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
constants.py | HumanLoop integration for Prompt Management (#7479) | 2024-12-30 22:26:03 -08:00
cost.json | |
cost_calculator.py | Allow assigning teams to org on UI + OpenAI omni-moderation cost model tracking (#7566) | 2025-01-08 16:58:21 -08:00
exceptions.py | LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448) | 2024-12-27 19:04:39 -08:00
main.py | fix(main.py): fix lm_studio/ embedding routing (#7658) | 2025-01-09 23:03:24 -08:00
model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): omni-moderation-latest-intents | 2025-01-08 19:06:04 -08:00
py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00
router.py | fix(utils.py): fix select tokenizer for custom tokenizer (#7599) | 2025-01-07 22:37:09 -08:00
scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | fix(utils.py): fix select tokenizer for custom tokenizer (#7599) | 2025-01-07 22:37:09 -08:00