litellm-mirror/litellm
Krish Dholakia db82b3bb2a feat(router.py): support request prioritization for text completion calls (#7540)
* feat(router.py): support request prioritization for text completion calls

* fix(internal_user_endpoints.py): fix sql query to return all keys, including null team id keys on `/user/info`

Fixes https://github.com/BerriAI/litellm/issues/7485

* fix: fix linting errors

* fix: fix linting error

* test(test_router_helper_utils.py): add direct test for '_schedule_factory'

Fixes code qa test
2025-01-03 19:35:44 -08:00
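The headline commit extends the router's request-prioritization support to text completion calls. As a minimal illustrative sketch of the general technique (not litellm's actual `Scheduler` implementation), a priority queue can order pending requests so that lower priority numbers are served first, with FIFO ordering among requests of equal priority; the `RequestQueue` class and its names below are hypothetical:

```python
import heapq
import itertools

# Illustrative sketch of priority-based request scheduling; not litellm's
# actual scheduler. Lower number = higher priority; ties are broken by
# arrival order.
class RequestQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        # Monotonic counter keeps FIFO order within the same priority level.
        self._counter = itertools.count()

    def submit(self, request_id: str, priority: int = 0) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), request_id))

    def next_request(self) -> str:
        # Pop the highest-priority (lowest number), oldest request.
        _, _, request_id = heapq.heappop(self._heap)
        return request_id

q = RequestQueue()
q.submit("background-batch", priority=1)
q.submit("interactive-chat", priority=0)
q.submit("another-batch", priority=1)
print(q.next_request())  # "interactive-chat" is dequeued first
```

With this commit, the same kind of priority handling that the router already applied to chat completions also covers text completion requests.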
adapters (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
assistants Revert "fix: add missing parameters order, limit, before, and after in get_as…" (#7542) 2025-01-03 16:32:12 -08:00
batch_completion (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
batches (Feat) add `"/v1/batches/{batch_id:path}/cancel"` endpoint (#7406) 2024-12-24 20:23:50 -08:00
caching (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
files (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
fine_tuning (feat) POST /fine_tuning/jobs support passing vertex specific hyper params (#7490) 2025-01-01 07:44:48 -08:00
integrations [Feature]: - allow print alert log to console (#7534) 2025-01-03 17:48:13 -08:00
litellm_core_utils Support checking provider-specific /models endpoints for available models based on key (#7538) 2025-01-03 19:29:59 -08:00
llms feat(router.py): support request prioritization for text completion calls (#7540) 2025-01-03 19:35:44 -08:00
proxy feat(router.py): support request prioritization for text completion calls (#7540) 2025-01-03 19:35:44 -08:00
realtime_api (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
rerank_api (feat) /batches - track user_api_key_alias, user_api_key_team_alias etc for /batch requests (#7401) 2024-12-24 17:44:28 -08:00
router_strategy Litellm dev 12 26 2024 p4 (#7439) 2024-12-27 12:01:42 -08:00
router_utils (Feat) - LiteLLM Use UsernamePasswordCredential for Azure OpenAI (#7496) 2025-01-01 14:11:27 -08:00
secret_managers fix(aws_secret_manager_V2.py): Error reading secret from AWS Secrets Manager: (#7541) 2025-01-03 18:22:12 -08:00
types [Feature]: - allow print alert log to console (#7534) 2025-01-03 17:48:13 -08:00
__init__.py Support checking provider-specific /models endpoints for available models based on key (#7538) 2025-01-03 19:29:59 -08:00
_logging.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
_redis.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
_service_logger.py LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) 2024-12-05 00:02:31 -08:00
_version.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
budget_manager.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
constants.py HumanLoop integration for Prompt Management (#7479) 2024-12-30 22:26:03 -08:00
cost.json
cost_calculator.py LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
exceptions.py LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448) 2024-12-27 19:04:39 -08:00
main.py Fix langfuse prompt management on proxy (#7535) 2025-01-03 12:42:37 -08:00
model_prices_and_context_window_backup.json test_aiohttp_openai 2025-01-03 15:12:56 -08:00
py.typed
router.py feat(router.py): support request prioritization for text completion calls (#7540) 2025-01-03 19:35:44 -08:00
scheduler.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
timeout.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
utils.py Support checking provider-specific /models endpoints for available models based on key (#7538) 2025-01-03 19:29:59 -08:00