litellm-mirror/litellm
Krish Dholakia 03fa654b97 Litellm dev 12 31 2024 p1 (#7488)
* fix(internal_user_endpoints.py): fix team list sort - handle team_alias being set + None

* fix(key_management_endpoints.py): allow team admin to create key for member via admin ui

Fixes https://github.com/BerriAI/litellm/issues/7482

* fix(proxy_server.py): allow querying info on specific model group via `/model_group/info`

allows client-side user to get model info from proxy

* fix(proxy_server.py): add docstring on `/model_group/info` showing how to filter by model name

* test(test_proxy_utils.py): add unit test for returning model group info filtered

* fix(proxy_server.py): fix query param

* fix(test_Get_model_info.py): handle no whitelisted bedrock models
2024-12-31 23:21:51 -08:00
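A minimal sketch of how a client-side user might build a filtered request to the `/model_group/info` endpoint described above. The `model_group` query-parameter name and the base URL are assumptions inferred from the commit messages (#7488), not verified against the proxy code — check the endpoint's docstring for the exact parameter.

```python
from urllib.parse import urlencode, urljoin

def model_group_info_url(base_url: str, model_group: str = "") -> str:
    # Build the request URL for the proxy's /model_group/info endpoint.
    # Passing a model_group filters the response to that group
    # (query-param name assumed from the commit description).
    url = urljoin(base_url.rstrip("/") + "/", "model_group/info")
    if model_group:
        url += "?" + urlencode({"model_group": model_group})
    return url

# Hypothetical local proxy address used for illustration only.
print(model_group_info_url("http://localhost:4000", "gpt-4o"))
# -> http://localhost:4000/model_group/info?model_group=gpt-4o
```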
adapters (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
assistants (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
batch_completion (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
batches (Feat) add `/v1/batches/{batch_id:path}/cancel` endpoint (#7406) 2024-12-24 20:23:50 -08:00
caching (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
files (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
fine_tuning (Feat) - new endpoint GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path} (#7427) 2024-12-27 17:01:14 -08:00
integrations Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
litellm_core_utils Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
llms (fix) v1/fine_tuning/jobs with VertexAI (#7487) 2024-12-31 15:09:56 -08:00
proxy Litellm dev 12 31 2024 p1 (#7488) 2024-12-31 23:21:51 -08:00
realtime_api (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
rerank_api (feat) /batches - track user_api_key_alias, user_api_key_team_alias etc for /batch requests (#7401) 2024-12-24 17:44:28 -08:00
router_strategy Litellm dev 12 26 2024 p4 (#7439) 2024-12-27 12:01:42 -08:00
router_utils Litellm dev 12 28 2024 p2 (#7458) 2024-12-28 19:38:06 -08:00
secret_managers (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
types Litellm dev 12 31 2024 p1 (#7488) 2024-12-31 23:21:51 -08:00
__init__.py HumanLoop integration for Prompt Management (#7479) 2024-12-30 22:26:03 -08:00
_logging.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
_redis.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
_service_logger.py LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) 2024-12-05 00:02:31 -08:00
_version.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
budget_manager.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
constants.py HumanLoop integration for Prompt Management (#7479) 2024-12-30 22:26:03 -08:00
cost.json
cost_calculator.py LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
exceptions.py LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 (#7448) 2024-12-27 19:04:39 -08:00
main.py HumanLoop integration for Prompt Management (#7479) 2024-12-30 22:26:03 -08:00
model_prices_and_context_window_backup.json ui new build 2024-12-28 18:14:36 -08:00
py.typed feature - Types for mypy - #360 2024-05-30 14:14:41 -04:00
router.py Litellm dev 12 30 2024 p1 (#7480) 2024-12-30 21:52:52 -08:00
scheduler.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
timeout.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
utils.py Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00