litellm/tests
Latest commit: 7f4dfe434a by Ishaan Jaff, 2024-09-17 20:23:14 -07:00
[Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754)
* add requester_metadata in standard logging payload
* log requester_metadata in metadata
* use StandardLoggingPayload for logging
* docs StandardLoggingPayload
* fix import
* include standard logging object in failure
* add test for requester metadata
* handle completion_tokens_details
* add test for completion_tokens_details
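For orientation, a test in the spirit of the "add test for completion_tokens_details" item above might look roughly like the sketch below. It is a minimal sketch, assuming OPENAI_API_KEY is set and that the returned usage object mirrors OpenAI's completion_tokens_details shape; the test name and assertions are hypothetical, not copied from the suite.

```python
# Illustrative sketch only, not the actual litellm test.
import warnings

import litellm


def test_o1_mini_reasoning_tokens_no_pydantic_warnings():
    # Turn every warning into an error so a pydantic warning fails the test.
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        response = litellm.completion(
            model="o1-mini",
            messages=[{"role": "user", "content": "What is 2 + 2?"}],
        )

    usage = response.usage
    details = getattr(usage, "completion_tokens_details", None)
    reasoning = getattr(details, "reasoning_tokens", None) if details else None
    # Providers that report a breakdown should return a non-negative count.
    assert reasoning is None or reasoning >= 0
```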
llm_translation [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) 2024-09-17 20:23:14 -07:00
load_tests fix otel load test 2024-09-14 18:04:28 -07:00
otel_tests fix team based tag routing 2024-08-29 14:37:44 -07:00
pass_through_tests add test for pass through streaming usage tracking 2024-09-02 16:17:49 -07:00
proxy_admin_ui_tests add test test_regenerate_key_ui 2024-09-10 09:12:03 -07:00
gettysburg.wav feat(main.py): support openai transcription endpoints 2024-03-08 10:25:19 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) 2024-09-02 21:32:55 -07:00
README.MD Update README.MD 2024-03-29 14:56:41 -07:00
test_callbacks_on_proxy.py fix - test num callbacks 2024-05-17 22:06:51 -07:00
test_config.py mark test_team_logging as flaky 2024-09-04 20:29:21 -07:00
test_debug_warning.py fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
test_end_users.py test(test_end_users.py): fix test 2024-07-13 21:46:19 -07:00
test_entrypoint.py refactor secret managers 2024-09-03 10:58:02 -07:00
test_fallbacks.py fix(user_api_key_auth.py): ensure user has access to fallback models 2024-06-20 16:02:19 -07:00
test_health.py (test) /health/readiness 2024-01-29 15:27:25 -08:00
test_keys.py mark test_key_info_spend_values_streaming as flaky 2024-08-29 14:39:53 -07:00
test_models.py test - OpenAI client is re-used for Azure, OpenAI 2024-05-10 13:43:19 -07:00
test_openai_batches_endpoint.py test batches endpoint on proxy 2024-07-30 09:46:30 -07:00
test_openai_endpoints.py Litellm fix router testing (#5748) 2024-09-17 18:02:23 -07:00
test_openai_files_endpoints.py test - batches endpoint 2024-07-26 18:09:49 -07:00
test_openai_fine_tuning.py fix cancel ft job route 2024-07-31 16:19:15 -07:00
test_organizations.py test(test_organizations.py): add testing for /organization/new endpoint 2024-03-02 12:13:54 -08:00
test_passthrough_endpoints.py test test_basic_passthrough 2024-08-06 21:17:07 -07:00
test_ratelimit.py test(test_ratelimit.py): fix test to send below rpm 2024-04-30 19:35:21 -07:00
test_spend_logs.py test /spend/report 2024-05-13 15:26:39 -07:00
test_team.py fix(management/utils.py): fix add_member to team when adding user_email 2024-08-10 17:12:09 -07:00
test_team_logging.py mark test_team_logging as flaky 2024-09-04 20:29:21 -07:00
test_users.py refactor(test_users.py): refactor test for user info to use mock endpoints 2024-08-12 18:48:43 -07:00

In total, litellm runs 500+ tests. Most tests live in /litellm/tests; the tests in this directory cover only the proxy Docker image and are run in CircleCI.
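These proxy tests typically exercise a running litellm proxy through its OpenAI-compatible API. As a hedged illustration only (the base URL, port, master key, and model alias below are assumptions for the example, not values taken from the CI configuration), a minimal proxy test could look like:

```python
# Illustrative sketch of the style of test kept in this directory.
import openai


def test_proxy_chat_completion():
    client = openai.OpenAI(
        api_key="sk-1234",               # assumed proxy master key
        base_url="http://0.0.0.0:4000",  # assumed local proxy address
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    assert response.choices[0].message.content
```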