litellm-mirror/tests
Krish Dholakia 4d89da9c97 Deepseek r1 support + watsonx qa improvements (#7907)
* fix(types/utils.py): support returning 'reasoning_content' for deepseek models

Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218

* fix(convert_dict_to_response.py): return deepseek response in provider_specific_field

allows for separating openai vs. non-openai params in model response

* fix(utils.py): support 'provider_specific_field' in delta chunk as well

allows deepseek reasoning content chunk to be returned to user from stream as well

Fixes https://github.com/BerriAI/litellm/issues/7877#issuecomment-2603813218

* fix(watsonx/chat/handler.py): fix passing space id to watsonx on chat route

* fix(watsonx/): fix watsonx_text/ route with space id

* fix(watsonx/): qa item - also adds better unit testing for watsonx embedding calls

* fix(utils.py): rename to '..fields'

* fix: fix linting errors

* fix(utils.py): fix typing - don't show provider-specific field if none or empty - prevents default response from being non-oai compatible

* fix: cleanup unused imports

* docs(deepseek.md): add docs for deepseek reasoning model
2025-01-21 23:13:15 -08:00
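
The deepseek bullets above describe how reasoning content is returned alongside the OpenAI-compatible response shape, both for regular calls and for streaming delta chunks. Below is a minimal sketch of how a caller might read it; the `deepseek/deepseek-reasoner` model name and the `provider_specific_fields` access path are assumptions inferred from the commit messages, not taken from the repo code.

```python
# Hedged sketch: how the deepseek 'reasoning_content' changes above might
# surface to a caller. Field names are assumptions from the commit messages.
import litellm

messages = [{"role": "user", "content": "What is 15% of 80?"}]

# Non-streaming: reasoning content rides in provider_specific_fields so the
# default response stays OpenAI-compatible.
response = litellm.completion(model="deepseek/deepseek-reasoner", messages=messages)
message = response.choices[0].message
print(message.content)  # final answer
fields = getattr(message, "provider_specific_fields", None) or {}
print(fields.get("reasoning_content"))  # reasoning trace, if present

# Streaming: per the delta-chunk fix, each chunk's delta can carry the same
# provider-specific reasoning content.
for chunk in litellm.completion(
    model="deepseek/deepseek-reasoner", messages=messages, stream=True
):
    delta = chunk.choices[0].delta
    fields = getattr(delta, "provider_specific_fields", None) or {}
    if fields.get("reasoning_content"):
        print(fields["reasoning_content"], end="")
```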
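The watsonx bullets concern routing a deployment space id through the chat and watsonx_text routes. A hedged sketch of the call shape follows; passing `space_id` as a completion kwarg and the model name are assumptions drawn from the commit messages (watsonx credentials are expected via the usual WATSONX_* environment variables).

```python
# Hedged sketch of the watsonx space-id fix: 'space_id' as a kwarg is an
# assumption from the commit messages, not verified against the handler code.
import litellm

response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",
    messages=[{"role": "user", "content": "Hello"}],
    space_id="my-deployment-space",  # hypothetical value, passed through to watsonx
)
print(response.choices[0].message.content)
```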
basic_proxy_startup_tests (fix) don't block proxy startup if license check fails & using prometheus (#6839) 2024-11-20 17:55:39 -08:00
batches_tests (Feat - Batches API) add support for retrieving vertex api batch jobs (#7661) 2025-01-09 18:35:03 -08:00
code_coverage_tests (Code quality) - Ban recursive functions in codebase (#7910) 2025-01-21 20:33:32 -08:00
documentation_tests Litellm dev 12 28 2024 p1 (#7463) 2024-12-28 20:26:00 -08:00
image_gen_tests Litellm dev 01 21 2025 p1 (#7898) 2025-01-21 20:36:11 -08:00
litellm_utils_tests (Feat) Add x-litellm-overhead-duration-ms and "x-litellm-response-duration-ms" in response from LiteLLM (#7899) 2025-01-21 20:27:55 -08:00
llm_translation Deepseek r1 support + watsonx qa improvements (#7907) 2025-01-21 23:13:15 -08:00
load_tests [BETA] Add OpenAI /images/variations + Topaz API support (#7700) 2025-01-11 23:27:46 -08:00
local_testing Deepseek r1 support + watsonx qa improvements (#7907) 2025-01-21 23:13:15 -08:00
logging_callback_tests litellm_overhead_latency_metric 2025-01-21 20:51:57 -08:00
old_proxy_tests/tests vertex testing use pathrise-convert-1606954137718 2025-01-05 14:00:17 -08:00
openai_misc_endpoints_tests test_e2e_batches_files 2024-12-28 19:54:04 -08:00
otel_tests (Feat - prometheus) - emit litellm_overhead_latency_metric (#7913) 2025-01-21 20:36:30 -08:00
pass_through_tests test: initial commit enforcing testing on all anthropic pass through … (#7794) 2025-01-15 22:02:35 -08:00
pass_through_unit_tests test: fix unit test 2025-01-16 21:11:17 -08:00
proxy_admin_ui_tests e2e ui testing fixes 2025-01-18 07:46:55 -08:00
proxy_unit_tests Litellm dev 01 21 2025 p1 (#7898) 2025-01-21 20:36:11 -08:00
router_unit_tests Improve Proxy Resiliency: Cooldown single-deployment model groups if 100% calls failed in high traffic (#7823) 2025-01-17 20:17:02 -08:00
gettysburg.wav feat(main.py): support openai transcription endpoints 2024-03-08 10:25:19 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) 2024-09-02 21:32:55 -07:00
README.MD Update README.MD 2024-03-29 14:56:41 -07:00
test_callbacks_on_proxy.py fix - test num callbacks 2024-05-17 22:06:51 -07:00
test_config.py fix testing - langfuse apis are flaky, we unit test team / key based logging in test_langfuse_unit_tests.py 2024-12-03 11:24:36 -08:00
test_debug_warning.py fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
test_end_users.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_entrypoint.py (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) 2024-10-08 11:34:43 +05:30
test_fallbacks.py fix(user_api_key_auth.py): ensure user has access to fallback models 2024-06-20 16:02:19 -07:00
test_health.py (test) /health/readiness 2024-01-29 15:27:25 -08:00
test_keys.py LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
test_logging.conf feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
test_models.py fix(proxy_server.py): fix get model info when litellm_model_id is set + move model analytics to free (#7886) 2025-01-21 08:19:07 -08:00
test_openai_endpoints.py ci/cd run again 2024-12-27 14:53:10 -08:00
test_organizations.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_passthrough_endpoints.py test test_basic_passthrough 2024-08-06 21:17:07 -07:00
test_ratelimit.py test(test_ratelimit.py): fix test to send below rpm 2024-04-30 19:35:21 -07:00
test_spend_logs.py LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
test_team.py Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
test_team_logging.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_users.py LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) 2024-10-08 21:57:03 -07:00

In total, litellm runs 500+ tests. Most tests live in /litellm/tests; the tests here are only those for the proxy Docker image, used in CircleCI.