| Name | Latest commit | Commit date |
|---|---|---|
| adapters | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| assistants | Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) | 2025-01-03 16:32:12 -08:00 |
| batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| batches | (Feat - Batches API) add support for retrieving vertex api batch jobs (#7661) | 2025-01-09 18:35:03 -08:00 |
| caching | fix 1 - latency fix (#7655) | 2025-01-09 15:57:05 -08:00 |
| files | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| fine_tuning | (feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490) | 2025-01-01 07:44:48 -08:00 |
| integrations | feat: allow to pass custom parent run id (#7651) | 2025-01-10 17:04:46 -08:00 |
| litellm_core_utils | (performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions (#7680) | 2025-01-10 17:57:22 -08:00 |
| llms | fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… (#7660) | 2025-01-10 07:31:59 -08:00 |
| proxy | [Bug fix]: Proxy Auth Layer - Allow Azure Realtime routes as llm_api_routes (#7684) | 2025-01-10 20:38:06 -08:00 |
| realtime_api | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| rerank_api | (feat) /batches - track user_api_key_alias, user_api_key_team_alias etc for /batch requests (#7401) | 2024-12-24 17:44:28 -08:00 |
| router_strategy | Litellm dev 12 26 2024 p4 (#7439) | 2024-12-27 12:01:42 -08:00 |
| router_utils | (Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI (#7496) | 2025-01-01 14:11:27 -08:00 |
| secret_managers | (Feat) Hashicorp Secret Manager - Allow storing virtual keys in secret manager (#7549) | 2025-01-04 11:35:59 -08:00 |
| types | (litellm sdk - perf improvement) - use O(1) set lookups for checking llm providers / models (#7672) | 2025-01-10 14:16:30 -08:00 |
| __init__.py | (litellm sdk - perf improvement) - use O(1) set lookups for checking llm providers / models (#7672) | 2025-01-10 14:16:30 -08:00 |
| _logging.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| _redis.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| _service_logger.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| constants.py | HumanLoop integration for Prompt Management (#7479) | 2024-12-30 22:26:03 -08:00 |
| cost.json | | |
| cost_calculator.py | Allow assigning teams to org on UI + OpenAI omni-moderation cost model tracking (#7566) | 2025-01-08 16:58:21 -08:00 |
| exceptions.py | LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448) | 2024-12-27 19:04:39 -08:00 |
| main.py | fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… (#7660) | 2025-01-10 07:31:59 -08:00 |
| model_prices_and_context_window_backup.json | build(model_prices_and_context_window.json): omni-moderation-latest-intents | 2025-01-08 19:06:04 -08:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| router.py | fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… (#7660) | 2025-01-10 07:31:59 -08:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | (performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions (#7680) | 2025-01-10 17:57:22 -08:00 |
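The `types` and `__init__.py` rows above reference PR #7672, which swaps repeated list scans for set membership checks when validating providers/models. A minimal sketch of that general technique, assuming hypothetical names (`litellm_provider_list`, `is_known_provider` are illustrative, not the SDK's actual identifiers):

```python
# Hypothetical example of the O(1)-lookup change described in #7672.
# Membership checks against a list are O(n): each lookup rescans the list.
litellm_provider_list = ["openai", "azure", "vertex_ai", "bedrock", "cohere"]

# Building a frozenset once turns each lookup into an O(1) hash probe.
LITELLM_PROVIDER_SET = frozenset(litellm_provider_list)

def is_known_provider(provider: str) -> bool:
    # Hot path: constant-time set lookup instead of a linear list scan.
    return provider in LITELLM_PROVIDER_SET
```

On a hot path that validates the provider for every request, the savings compound: the list scan costs O(n) per call, while the set lookup stays constant as the provider list grows.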
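Similarly, the `litellm_core_utils` and `utils.py` rows reference PR #7680, about not spawning a worker thread every time synchronous code needs the result of an async function. A hedged sketch of the general pattern under stated assumptions (the helper name `run_coroutine_sync` and the thread-per-call "before" state are illustrative, not litellm's actual implementation):

```python
# Illustrative sketch of the thread-avoidance idea in #7680; not litellm's code.
import asyncio

async def fetch_completion() -> str:
    # Stand-in for an async SDK call.
    await asyncio.sleep(0)
    return "ok"

def run_coroutine_sync(coro):
    """Run a coroutine from sync code without spawning a thread per call.

    When no event loop is running in this thread, asyncio.run drives the
    coroutine directly on the current thread, avoiding a new thread (and
    its stack allocation) for every invocation.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: run the coroutine here, no thread needed.
        return asyncio.run(coro)
    # Blocking on a sync bridge inside a running loop would deadlock it;
    # callers already inside a loop should await the coroutine instead.
    raise RuntimeError("await the coroutine directly inside an event loop")

print(run_coroutine_sync(fetch_completion()))  # -> ok
```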