# litellm-mirror/tests
Latest commit: f899b828cf by Krish Dholakia — Support openrouter reasoning_content on streaming (#9094), 2025-03-09 20:03:59 -07:00

* feat(convert_dict_to_response.py): support openrouter format of reasoning content
* fix(transformation.py): fix openrouter streaming with reasoning content — Fixes https://github.com/BerriAI/litellm/issues/8193#issuecomment-270892962
* fix: fix type error
| Path | Last commit | Date |
| --- | --- | --- |
| basic_proxy_startup_tests | (fix) don't block proxy startup if license check fails & using prometheus (#6839) | 2024-11-20 17:55:39 -08:00 |
| batches_tests | cleanup_azure_files | 2025-02-15 15:32:42 -08:00 |
| code_coverage_tests | Bug fix - String data: stripped from entire content in streamed Gemini responses (#9070) | 2025-03-07 21:06:39 -08:00 |
| documentation_tests | Litellm dev 12 28 2024 p1 (#7463) | 2024-12-28 20:26:00 -08:00 |
| image_gen_tests | fix(base_aws_llm.py): remove region name before sending in args (#8998) | 2025-03-04 23:05:28 -08:00 |
| litellm | support bytes.IO for audio transcription (#9071) | 2025-03-08 08:47:15 -08:00 |
| litellm_utils_tests | (AWS Secret Manager) - Using K/V pairs in 1 AWS Secret (#9039) | 2025-03-06 19:30:18 -08:00 |
| llm_translation | [Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029) | 2025-03-06 18:32:58 -08:00 |
| load_tests | (perf) Fix memory leak on /completions route (#8551) | 2025-02-14 18:58:16 -08:00 |
| local_testing | Support openrouter reasoning_content on streaming (#9094) | 2025-03-09 20:03:59 -07:00 |
| logging_callback_tests | Fix batches api cost tracking + Log batch models in spend logs / standard logging payload (#9077) | 2025-03-08 11:47:25 -08:00 |
| multi_instance_e2e_tests | (e2e testing) - add tests for using litellm /team/ updates in multi-instance deployments with Redis (#8440) | 2025-02-10 19:33:27 -08:00 |
| old_proxy_tests/tests | vertex testing use pathrise-convert-1606954137718 | 2025-01-05 14:00:17 -08:00 |
| openai_misc_endpoints_tests | test_e2e_batches_files | 2025-03-06 08:42:17 -08:00 |
| otel_tests | test_user_email_metrics | 2025-02-25 10:47:09 -08:00 |
| pass_through_tests | (Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013) | 2025-03-06 00:43:08 -08:00 |
| pass_through_unit_tests | (Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013) | 2025-03-06 00:43:08 -08:00 |
| proxy_admin_ui_tests | fix test_list_key_helper_team_filtering | 2025-03-08 17:21:32 -08:00 |
| proxy_security_tests | (Security fix) - remove code block that inserts master key hash into DB (#8268) | 2025-02-05 17:25:42 -08:00 |
| proxy_unit_tests | (Clean up) - Allow switching off storing Error Logs in DB (#9084) | 2025-03-08 16:12:03 -08:00 |
| router_unit_tests | (Refactor) /v1/messages to follow simpler logic for Anthropic API spec (#9013) | 2025-03-06 00:43:08 -08:00 |
| store_model_in_db_tests | test_chat_completion_bad_model_with_spend_logs | 2025-02-28 20:19:43 -08:00 |
| gettysburg.wav | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00 |
| large_text.py | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00 |
| openai_batch_completions.jsonl | feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) | 2024-09-02 21:32:55 -07:00 |
| README.MD | add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) | 2025-02-20 21:23:54 -08:00 |
| test_callbacks_on_proxy.py | fix - test num callbacks | 2024-05-17 22:06:51 -07:00 |
| test_config.py | fix testing - langfuse apis are flaky, we unit test team / key based logging in test_langfuse_unit_tests.py | 2024-12-03 11:24:36 -08:00 |
| test_debug_warning.py | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00 |
| test_end_users.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_entrypoint.py | (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) | 2024-10-08 11:34:43 +05:30 |
| test_fallbacks.py | (Feat) - return x-litellm-attempted-fallbacks in responses from litellm proxy (#8558) | 2025-02-15 14:54:23 -08:00 |
| test_health.py | (test) /health/readiness | 2024-01-29 15:27:25 -08:00 |
| test_keys.py | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| test_logging.conf | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| test_models.py | Improved wildcard route handling on /models and /model_group/info (#8473) | 2025-02-11 19:37:43 -08:00 |
| test_openai_endpoints.py | (security fix) - Enforce model access restrictions on Azure OpenAI route (#8888) | 2025-02-27 21:24:58 -08:00 |
| test_organizations.py | Add remaining org CRUD endpoints + support deleting orgs on UI (#8561) | 2025-02-15 15:48:06 -08:00 |
| test_passthrough_endpoints.py | test test_basic_passthrough | 2024-08-06 21:17:07 -07:00 |
| test_ratelimit.py | (Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112) | 2025-01-30 19:35:50 -08:00 |
| test_spend_logs.py | (feat) - track org_id in SpendLogs (#8253) | 2025-02-04 21:08:05 -08:00 |
| test_team.py | fix(team_endpoints.py): ensure 404 raised when team not found (#9038) | 2025-03-06 22:04:36 -08:00 |
| test_team_logging.py | test: skip flaky test | 2024-11-22 19:23:36 +05:30 |
| test_team_members.py | test: add more unit testing for team member endpoints (#8170) | 2025-02-01 11:23:00 -08:00 |
| test_users.py | Internal User Endpoint - vulnerability fix + response type fix (#8228) | 2025-02-04 06:41:14 -08:00 |

In total, litellm runs 1000+ tests.

[02/20/2025] Update:

To make it easier to contribute and to map which behavior is tested, we've started mirroring the litellm directory structure in tests/litellm.

This folder runs mock tests only.
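As a rough illustration of what "mock tests only" means in practice, here is a minimal, hypothetical sketch using Python's `unittest.mock`: the LLM call is replaced with a canned response so the test never touches the network. The `call_llm` wrapper below is illustrative only and is not litellm's actual API.

```python
from unittest.mock import MagicMock


# Hypothetical code under test: a thin wrapper around an injected client.
def call_llm(client, model, messages):
    """Return the assistant message content from a completion call."""
    response = client.completion(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]


def test_call_llm_returns_message_content():
    # Replace the client with a mock so no real API call is made.
    client = MagicMock()
    client.completion.return_value = {
        "choices": [{"message": {"content": "mocked reply"}}]
    }

    result = call_llm(
        client,
        "gpt-3.5-turbo",
        [{"role": "user", "content": "hi"}],
    )

    assert result == "mocked reply"
    client.completion.assert_called_once()
```

Tests in this style stay fast and deterministic because every external dependency is stubbed out.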