litellm-mirror/tests
Krish Dholakia ac4f32fb1e
Cost tracking for gemini-2.5-pro (#9837)
* build(model_prices_and_context_window.json): add google/gemini-2.0-flash-lite-001 versioned pricing

Closes https://github.com/BerriAI/litellm/issues/9829

* build(model_prices_and_context_window.json): add initial support for 'supported_output_modalities' param

* build(model_prices_and_context_window.json): add initial support for 'supported_output_modalities' param

* build(model_prices_and_context_window.json): add supported endpoints to gemini-2.5-pro

* build(model_prices_and_context_window.json): add gemini 200k+ pricing

* feat(utils.py): support cost calculation for gemini-2.5-pro above 200k tokens

Fixes https://github.com/BerriAI/litellm/issues/9807

* build: test dockerfile change

* build: revert apk change

* ci(config.yml): pip install wheel

* ci: test problematic package first

* ci(config.yml): pip install only binary

* ci: try more things

* ci: test different ml_dtypes version

* ci(config.yml): check ml_dtypes==0.4.0

* ci: test

* ci: cleanup config.yml

* ci: specify ml dtypes in requirements.txt

* ci: remove redisvl dependency (temporary)

* fix: fix linting errors

* test: update test

* test: fix test
2025-04-09 18:48:43 -07:00
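The 200k+ pricing work in the commit above boils down to tiered per-token rates. The sketch below is a minimal, hypothetical illustration of that idea, assuming (as Gemini's pricing does) that a prompt over the 200k-token threshold is billed at the higher rate for the entire request; the function name and rate values are illustrative, not litellm's actual API or real Gemini prices.

```python
# Hypothetical sketch of tiered cost calculation for a model whose
# per-token rates increase once the prompt exceeds 200k tokens.
# All rates below are placeholder values, not real Gemini pricing.
TIER_THRESHOLD = 200_000

def calculate_tiered_cost(
    prompt_tokens: int,
    completion_tokens: int,
    input_cost_per_token: float,
    output_cost_per_token: float,
    input_cost_per_token_above_200k: float,
    output_cost_per_token_above_200k: float,
) -> float:
    # Assumption: crossing the threshold switches the whole request
    # to the higher tier (rather than pricing only the excess tokens).
    if prompt_tokens > TIER_THRESHOLD:
        return (
            prompt_tokens * input_cost_per_token_above_200k
            + completion_tokens * output_cost_per_token_above_200k
        )
    return (
        prompt_tokens * input_cost_per_token
        + completion_tokens * output_cost_per_token
    )
```

With illustrative rates, a 100k-token prompt is billed entirely at the base tier, while a 300k-token prompt is billed entirely at the higher tier.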
basic_proxy_startup_tests (fix) don't block proxy startup if license check fails & using prometheus (#6839) 2024-11-20 17:55:39 -08:00
batches_tests VertexAI non-jsonl file storage support (#9781) 2025-04-09 14:01:48 -07:00
code_coverage_tests fix(vertex_ai.py): move to only passing in accepted keys by vertex ai response schema (#8992) 2025-04-07 18:07:01 -07:00
documentation_tests Litellm dev 12 28 2024 p1 (#7463) 2024-12-28 20:26:00 -08:00
image_gen_tests get_base_image_generation_call_args 2025-04-02 21:04:06 -07:00
litellm Cost tracking for gemini-2.5-pro (#9837) 2025-04-09 18:48:43 -07:00
litellm_utils_tests Allow passing thinking param to litellm proxy via client sdk + Code QA Refactor on get_optional_params (get correct values) (#9386) 2025-04-07 21:04:11 -07:00
llm_responses_api_testing test_openai_o1_pro_response_api_streaming 2025-03-20 13:04:49 -07:00
llm_translation VertexAI non-jsonl file storage support (#9781) 2025-04-09 14:01:48 -07:00
load_tests fix vertex embedding perf test 2025-03-26 10:25:50 -07:00
local_testing Cost tracking for gemini-2.5-pro (#9837) 2025-04-09 18:48:43 -07:00
logging_callback_tests Realtime API Cost tracking (#9795) 2025-04-07 16:43:12 -07:00
mcp_tests test fixes 2025-03-29 18:34:58 -07:00
multi_instance_e2e_tests (e2e testing) - add tests for using litellm /team/ updates in multi-instance deployments with Redis (#8440) 2025-02-10 19:33:27 -08:00
old_proxy_tests/tests vertex testing use pathrise-convert-1606954137718 2025-01-05 14:00:17 -08:00
openai_endpoints_tests test_bad_request_bad_param_error 2025-03-13 16:02:21 -07:00
otel_tests _get_exception_class_name 2025-04-04 21:23:21 -07:00
pass_through_tests [Bug Fix] Add support for UploadFile on LLM Pass through endpoints (OpenAI, Azure etc) (#9853) 2025-04-09 15:29:20 -07:00
pass_through_unit_tests use new anthropic interface 2025-03-31 14:31:09 -07:00
proxy_admin_ui_tests test: update tests 2025-03-22 12:56:42 -07:00
proxy_security_tests (Security fix) - remove code block that inserts master key hash into DB (#8268) 2025-02-05 17:25:42 -08:00
proxy_unit_tests Move daily user transaction logging outside of 'disable_spend_logs' flag - different tables (#9772) 2025-04-05 09:58:16 -07:00
router_unit_tests fix(router.py): support reusable credentials via passthrough router (#9758) 2025-04-04 18:40:14 -07:00
spend_tracking_tests test_long_term_spend_accuracy_with_bursts 2025-03-31 21:09:29 -07:00
store_model_in_db_tests test_chat_completion_bad_model_with_spend_logs 2025-02-28 20:19:43 -08:00
gettysburg.wav feat(main.py): support openai transcription endpoints 2024-03-08 10:25:19 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) 2024-09-02 21:32:55 -07:00
README.MD add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) 2025-02-20 21:23:54 -08:00
test_callbacks_on_proxy.py fix - test num callbacks 2024-05-17 22:06:51 -07:00
test_config.py fix testing - langfuse apis are flaky, we unit test team / key based logging in test_langfuse_unit_tests.py 2024-12-03 11:24:36 -08:00
test_debug_warning.py fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
test_end_users.py test: run test earlier to catch error 2025-03-27 23:08:52 -07:00
test_entrypoint.py (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) 2024-10-08 11:34:43 +05:30
test_fallbacks.py test: fix test 2025-03-10 22:00:50 -07:00
test_health.py (test) /health/readiness 2024-01-29 15:27:25 -08:00
test_keys.py LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) 2024-12-01 05:24:11 -08:00
test_logging.conf feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
test_models.py fix(model_management_endpoints.py): fix allowing team admins to update team models (#9697) 2025-04-01 22:28:15 -07:00
test_openai_endpoints.py test string checked for model access control 2025-03-10 20:04:18 -07:00
test_organizations.py Add remaining org CRUD endpoints + support deleting orgs on UI (#8561) 2025-02-15 15:48:06 -08:00
test_passthrough_endpoints.py test test_basic_passthrough 2024-08-06 21:17:07 -07:00
test_ratelimit.py (Refactor / QA) - Use LoggingCallbackManager to append callbacks and ensure no duplicate callbacks are added (#8112) 2025-01-30 19:35:50 -08:00
test_spend_logs.py (feat) - track org_id in SpendLogs (#8253) 2025-02-04 21:08:05 -08:00
test_team.py fix(team_endpoints.py): ensure 404 raised when team not found (#9038) 2025-03-06 22:04:36 -08:00
test_team_logging.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_team_members.py test: add more unit testing for team member endpoints (#8170) 2025-02-01 11:23:00 -08:00
test_users.py Internal User Endpoint - vulnerability fix + response type fix (#8228) 2025-02-04 06:41:14 -08:00

In total, litellm runs 1,000+ tests.

[02/20/2025] Update:

To make it easier to contribute and to map which behavior is tested where, we've started mirroring the litellm directory structure under tests/litellm. That folder can only run mock tests.