litellm-mirror/tests
Ishaan Jaff c1a642ce20 (2025-04-14 21:17:42 -07:00)
[UI] Allow setting prompt cache_control_injection_points (#10000)

* test_anthropic_cache_control_hook_system_message
* test_anthropic_cache_control_hook.py
* should_run_prompt_management_hooks
* fix should_run_prompt_management_hooks
* test_anthropic_cache_control_hook_specific_index
* fix test
* fix linting errors
* ChatCompletionCachedContent
* initial commit for cache control
* fixes ui design
* fix inserting cache_control_injection_points
* fix entering cache control points
* fixes for using cache control on ui + backend
* update cache control settings on edit model page
* fix init custom logger compatible class
* fix linting errors
* fix linting errors
* fix get_chat_completion_prompt
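The cache-control work above relates to Anthropic prompt caching, where individual messages carry a `cache_control` marker. As a rough illustration, here is a minimal sketch of injecting such markers into a message list; the `{"type": "ephemeral"}` block follows Anthropic's documented prompt-caching format, but the `inject_cache_control` helper and the injection-point shape are assumptions for illustration, not litellm's actual implementation:

```python
# Illustrative helper: mark messages matching an injection point with a
# cache_control block. The {"type": "ephemeral"} value is Anthropic's
# documented prompt-caching marker; the injection-point dict shape here
# is an assumption, not litellm's real config schema.
def inject_cache_control(messages, injection_points):
    for point in injection_points:
        for msg in messages:
            if msg.get("role") == point.get("role"):
                msg["cache_control"] = {"type": "ephemeral"}
    return messages


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
result = inject_cache_control(
    messages, [{"location": "message", "role": "system"}]
)
# Only the system message is marked for caching.
```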
| Name | Last commit message | Last commit date |
|---|---|---|
| basic_proxy_startup_tests | (fix) don't block proxy startup if license check fails & using prometheus (#6839) | 2024-11-20 17:55:39 -08:00 |
| batches_tests | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| code_coverage_tests | fix(cost_calculator.py): handle custom pricing at deployment level fo… (#9855) | 2025-04-09 22:13:10 -07:00 |
| documentation_tests | Litellm dev 12 28 2024 p1 (#7463) | 2024-12-28 20:26:00 -08:00 |
| image_gen_tests | get_base_image_generation_call_args | 2025-04-02 21:04:06 -07:00 |
| litellm | [UI] Allow setting prompt cache_control_injection_points (#10000) | 2025-04-14 21:17:42 -07:00 |
| litellm_utils_tests | [Feat] Add litellm.supports_reasoning() util to track if an llm supports reasoning (#9923) | 2025-04-11 17:56:04 -07:00 |
| llm_responses_api_testing | test_openai_o1_pro_response_api_streaming | 2025-03-20 13:04:49 -07:00 |
| llm_translation | fix(transformation.py): correctly translate 'thinking' param for lite… (#9904) | 2025-04-11 23:25:13 -07:00 |
| load_tests | fix vertex embedding perf test | 2025-03-26 10:25:50 -07:00 |
| local_testing | Updated cohere v2 passthrough (#9997) | 2025-04-14 19:51:01 -07:00 |
| logging_callback_tests | [UI] Allow setting prompt cache_control_injection_points (#10000) | 2025-04-14 21:17:42 -07:00 |
| mcp_tests | test fixes | 2025-03-29 18:34:58 -07:00 |
| multi_instance_e2e_tests | (e2e testing) - add tests for using litellm /team/ updates in multi-instance deployments with Redis (#8440) | 2025-02-10 19:33:27 -08:00 |
| old_proxy_tests/tests | vertex testing use pathrise-convert-1606954137718 | 2025-01-05 14:00:17 -08:00 |
| openai_endpoints_tests | test_bad_request_bad_param_error | 2025-03-13 16:02:21 -07:00 |
| otel_tests | [Team Member permissions] - Fixes (#9945) | 2025-04-12 11:17:51 -07:00 |
| pass_through_tests | [Bug Fix] Add support for UploadFile on LLM Pass through endpoints (OpenAI, Azure etc) (#9853) | 2025-04-09 15:29:20 -07:00 |
| pass_through_unit_tests | use new anthropic interface | 2025-03-31 14:31:09 -07:00 |
| proxy_admin_ui_tests | [Team Member permissions] - Fixes (#9945) | 2025-04-12 11:17:51 -07:00 |
| proxy_security_tests | (Security fix) - remove code block that inserts master key hash into DB (#8268) | 2025-02-05 17:25:42 -08:00 |
| proxy_unit_tests | [Feat] Emit Key, Team Budget metrics on a cron job schedule (#9528) | 2025-04-10 16:59:14 -07:00 |
| router_unit_tests | fix(router.py): support reusable credentials via passthrough router (#9758) | 2025-04-04 18:40:14 -07:00 |
| spend_tracking_tests | test_long_term_spend_accuracy_with_bursts | 2025-03-31 21:09:29 -07:00 |
| store_model_in_db_tests | test_chat_completion_bad_model_with_spend_logs | 2025-02-28 20:19:43 -08:00 |
| gettysburg.wav | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00 |
| large_text.py | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00 |
| openai_batch_completions.jsonl | feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) | 2024-09-02 21:32:55 -07:00 |
| README.MD | add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) | 2025-02-20 21:23:54 -08:00 |
| test_callbacks_on_proxy.py | fix - test num callbacks | 2024-05-17 22:06:51 -07:00 |
| test_config.py | fix testing - langfuse apis are flaky, we unit test team / key based logging in test_langfuse_unit_tests.py | 2024-12-03 11:24:36 -08:00 |
| test_debug_warning.py | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00 |
| test_end_users.py | test: run test earlier to catch error | 2025-03-27 23:08:52 -07:00 |
| test_entrypoint.py | (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) | 2024-10-08 11:34:43 +05:30 |
| test_fallbacks.py | test: fix test | 2025-03-10 22:00:50 -07:00 |
| test_health.py | (test) /health/readiness | 2024-01-29 15:27:25 -08:00 |
| test_keys.py | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| test_logging.conf | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| test_models.py | fix(model_management_endpoints.py): fix allowing team admins to update team models (#9697) | 2025-04-01 22:28:15 -07:00 |
| test_openai_endpoints.py | test string checked for model access control | 2025-03-10 20:04:18 -07:00 |
| test_organizations.py | Add remaining org CRUD endpoints + support deleting orgs on UI (#8561) | 2025-02-15 15:48:06 -08:00 |
| test_passthrough_endpoints.py | test test_basic_passthrough | 2024-08-06 21:17:07 -07:00 |
| test_ratelimit.py | (Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112) | 2025-01-30 19:35:50 -08:00 |
| test_spend_logs.py | (feat) - track org_id in SpendLogs (#8253) | 2025-02-04 21:08:05 -08:00 |
| test_team.py | fix(team_endpoints.py): ensure 404 raised when team not found (#9038) | 2025-03-06 22:04:36 -08:00 |
| test_team_logging.py | test: skip flaky test | 2024-11-22 19:23:36 +05:30 |
| test_team_members.py | test: add more unit testing for team member endpoints (#8170) | 2025-02-01 11:23:00 -08:00 |
| test_users.py | Internal User Endpoint - vulnerability fix + response type fix (#8228) | 2025-02-04 06:41:14 -08:00 |

In total, litellm runs 1,000+ tests.

[02/20/2025] Update:

To make it easier to contribute and to map which behavior is tested, we've started mirroring the litellm directory structure in tests/litellm. This folder can only run mock tests.
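Since tests in that folder are restricted to mocks, they stub out network calls rather than hitting a live LLM API. As a rough sketch of the pattern, assuming standard pytest conventions (the `get_model_name` helper and the response shape below are hypothetical, not litellm's actual API):

```python
# Minimal sketch of a mock-style test: no network call is made, the
# client is replaced with a MagicMock whose response shape we control.
from unittest.mock import MagicMock


def get_model_name(client):
    # Hypothetical helper under test: reads the model field off a
    # completion response returned by the (mocked) client.
    return client.completion(model="gpt-4o", messages=[]).model


def test_get_model_name_uses_mocked_client():
    mock_client = MagicMock()
    mock_client.completion.return_value = MagicMock(model="gpt-4o")

    assert get_model_name(mock_client) == "gpt-4o"
    mock_client.completion.assert_called_once()
```

Because everything is mocked, such tests run fast and deterministically in CI with no provider credentials.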