Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-27 03:34:10 +00:00).
Latest commit:

* feat(view_logs.tsx): show model id + api base in request logs, for easier debugging
* fix(index.tsx): fix length of api base, for easier viewing
* refactor(leftnav.tsx): show models tab to team admin
* feat(model_dashboard.tsx): add explainer for what the 'models' page is for team admin, helping them understand how they can use it
* feat(model_management_endpoints.py): restrict model add by team to just team admin; allow team admin to add models via non-team keys (e.g. ui token)
* test(test_add_update_models.py): update unit testing for new behaviour
* fix(model_dashboard.tsx): show user the models
* feat(proxy_server.py): add new query param 'user_models_only' to `/v2/model/info`, allowing a user to retrieve just the models they've added; used in the UI to show internal users just their own models (see the sketch after this list)
* feat(model_dashboard.tsx): allow team admins to view their own models
* fix: allow ui user to fetch model cost map
* feat(add_model_tab.tsx): require team admins to specify team when onboarding models
* fix(_types.py): add `/v1/model/info` to the info routes; `/model/info` was already there
* fix(model_info_view.tsx): allow user to edit a model they created
* fix(model_management_endpoints.py): allow team admin to update team model
* feat(model_management_endpoints.py): allow team admin to delete team models
* fix(model_management_endpoints.py): don't require team id to be set when adding a model
* fix(proxy_server.py): fix linting error
* fix: fix ui linting error
* fix(model_management_endpoints.py): ensure consistent auth checks on all model calls
* test: remove old test - function no longer exists in same form
* test: add updated mock testing
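The following is a minimal sketch of how a script (or the UI) might use the new `user_models_only` query parameter on `/v2/model/info` described above. The proxy URL, API key, and the exact response shape are assumptions for illustration only, not taken from the changeset.

```python
import requests

# Assumed local proxy URL and key -- placeholders for illustration.
PROXY_BASE_URL = "http://localhost:4000"
API_KEY = "sk-1234"

# Ask /v2/model/info to return only the models this user has added,
# via the new 'user_models_only' query param from the changeset.
response = requests.get(
    f"{PROXY_BASE_URL}/v2/model/info",
    params={"user_models_only": "true"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()

# Response schema is an assumption: model-info style endpoints typically
# return a {"data": [...]} list of model entries.
for model in response.json().get("data", []):
    print(model.get("model_name"), model.get("model_info", {}).get("id"))
```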
Test suite directories:

- basic_proxy_startup_tests
- batches_tests
- code_coverage_tests
- documentation_tests
- image_gen_tests
- litellm
- litellm_utils_tests
- llm_responses_api_testing
- llm_translation
- load_tests
- local_testing
- logging_callback_tests
- mcp_tests
- multi_instance_e2e_tests
- old_proxy_tests/tests
- openai_endpoints_tests
- otel_tests
- pass_through_tests
- pass_through_unit_tests
- proxy_admin_ui_tests
- proxy_security_tests
- proxy_unit_tests
- router_unit_tests
- store_model_in_db_tests

Top-level test files and fixtures:

- gettysburg.wav
- large_text.py
- openai_batch_completions.jsonl
- README.MD
- test_callbacks_on_proxy.py
- test_config.py
- test_debug_warning.py
- test_end_users.py
- test_entrypoint.py
- test_fallbacks.py
- test_health.py
- test_keys.py
- test_logging.conf
- test_models.py
- test_openai_endpoints.py
- test_organizations.py
- test_passthrough_endpoints.py
- test_ratelimit.py
- test_spend_logs.py
- test_team.py
- test_team_logging.py
- test_team_members.py
- test_users.py
In total, litellm runs 1000+ tests.

[02/20/2025] Update:

To make it easier to contribute and to map which behavior is tested, we've started mirroring the `litellm` directory structure under `tests/litellm`. This folder can only run mock tests.
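As an illustration of the mock-only constraint, a test in `tests/litellm` might patch the completion call so no provider is ever contacted. This is a minimal sketch, not actual litellm test code; the test name and the mocked response shape are assumptions.

```python
from unittest.mock import MagicMock, patch

import litellm


def test_completion_returns_mocked_reply():
    """Mock-only test: patch litellm.completion so no network call is made."""
    fake_response = MagicMock()
    fake_response.choices = [MagicMock(message=MagicMock(content="mocked reply"))]

    with patch("litellm.completion", return_value=fake_response) as mock_completion:
        result = litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hi"}],
        )

    # The patched function was invoked exactly once, and the mocked
    # response flows back to the caller unchanged.
    mock_completion.assert_called_once()
    assert result.choices[0].message.content == "mocked reply"
```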