litellm/tests
Ishaan Jaff c73ce95c01
(feat) - provider budget improvements - ensure provider budgets work with multiple proxy instances + improve latency to ~90ms (#6886)
* use 1 file for duration_in_seconds
* add to readme.md
* re use duration_in_seconds
* fix importing _extract_from_regex, get_last_day_of_month
* fix import
* update provider budget routing
* fix - remove dup test
* add support for using in multi instance environments
* test_in_memory_redis_sync_e2e
* test_in_memory_redis_sync_e2e
* fix test_in_memory_redis_sync_e2e
* fix code quality check
* fix test provider budgets
* working provider budget tests
* add fixture for provider budget routing
* fix router testing for provider budgets
* add comments on provider budget routing
* use RedisPipelineIncrementOperation
* add redis async_increment_pipeline
* use redis async_increment_pipeline
* use lower value for testing
* use redis async_increment_pipeline
* use consistent key name for increment op
* add handling for budget windows
* fix typing async_increment_pipeline
* fix set attr
* add clear doc strings
* unit testing for provider budgets
* test_redis_increment_pipeline
2024-11-24 16:36:19 -08:00
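The commit messages above mention batching budget increments through a Redis pipeline (`async_increment_pipeline`, `RedisPipelineIncrementOperation`) to cut per-request latency. A minimal sketch of that batching pattern follows, using a plain in-memory store in place of Redis; all class and key names here are illustrative, not litellm's real API.

```python
# Sketch of pipelined budget increments: collect many increment operations
# and apply them in one flush instead of one round trip each.
from dataclasses import dataclass, field

@dataclass
class IncrementOp:
    key: str        # e.g. "provider_budget:openai:1d" (hypothetical key format)
    amount: float   # spend to add for this budget window
    ttl: int        # budget window length in seconds

@dataclass
class InMemoryBudgetStore:
    """Stand-in for Redis: applies a batch of increments in one call."""
    counters: dict = field(default_factory=dict)

    def increment_pipeline(self, ops):
        # With real Redis this would queue INCRBYFLOAT + EXPIRE commands on a
        # pipeline and execute them once, so latency stays flat no matter how
        # many providers are tracked per request.
        results = []
        for op in ops:
            self.counters[op.key] = self.counters.get(op.key, 0.0) + op.amount
            results.append(self.counters[op.key])
        return results

store = InMemoryBudgetStore()
ops = [IncrementOp("provider_budget:openai:1d", 0.02, 86400),
       IncrementOp("provider_budget:azure:1d", 0.05, 86400)]
print(store.increment_pipeline(ops))  # [0.02, 0.05]
```

The design point is that the caller never blocks once per counter; the batch is the unit of work.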
basic_proxy_startup_tests (fix) don't block proxy startup if license check fails & using prometheus (#6839) 2024-11-20 17:55:39 -08:00
code_coverage_tests (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) 2024-11-22 18:47:26 -08:00
documentation_tests Litellm dev 11 20 2024 (#6831) 2024-11-21 04:06:06 +05:30
image_gen_tests (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
llm_translation Litellm dev 11 21 2024 (#6837) 2024-11-22 01:53:52 +05:30
load_tests (load testing) add vertex_ai embeddings load test (#6004) 2024-10-03 14:39:15 +05:30
local_testing (feat) - provider budget improvements - ensure provider budgets work with multiple proxy instances + improve latency to ~90ms (#6886) 2024-11-24 16:36:19 -08:00
logging_callback_tests LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
old_proxy_tests/tests Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
otel_tests (feat) prometheus have well defined latency buckets (#6211) 2024-10-14 17:16:01 +05:30
pass_through_tests (feat) use @google-cloud/vertexai js sdk with litellm (#6873) 2024-11-22 16:50:10 -08:00
pass_through_unit_tests LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
proxy_admin_ui_tests Litellm dev 11 23 2024 (#6881) 2024-11-23 22:37:16 +05:30
proxy_unit_tests (fix) passthrough - allow internal users to access /anthropic (#6843) 2024-11-21 11:46:50 -08:00
router_unit_tests Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
gettysburg.wav feat(main.py): support openai transcription endpoints 2024-03-08 10:25:19 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) 2024-09-02 21:32:55 -07:00
README.MD Update README.MD 2024-03-29 14:56:41 -07:00
test_callbacks_on_proxy.py fix - test num callbacks 2024-05-17 22:06:51 -07:00
test_config.py test_team_logging 2024-11-21 22:01:12 -08:00
test_debug_warning.py fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
test_end_users.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_entrypoint.py (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) 2024-10-08 11:34:43 +05:30
test_fallbacks.py fix(user_api_key_auth.py): ensure user has access to fallback models 2024-06-20 16:02:19 -07:00
test_health.py (test) /health/readiness 2024-01-29 15:27:25 -08:00
test_keys.py Litellm key update fix (#6710) 2024-11-14 00:42:37 +05:30
test_logging.conf feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
test_models.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_openai_batches_endpoint.py test batches endpoint on proxy 2024-07-30 09:46:30 -07:00
test_openai_endpoints.py fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577) 2024-11-05 22:03:44 +05:30
test_openai_files_endpoints.py test - batches endpoint 2024-07-26 18:09:49 -07:00
test_openai_fine_tuning.py fix cancel ft job route 2024-07-31 16:19:15 -07:00
test_organizations.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_passthrough_endpoints.py test test_basic_passthrough 2024-08-06 21:17:07 -07:00
test_ratelimit.py test(test_ratelimit.py): fix test to send below rpm 2024-04-30 19:35:21 -07:00
test_spend_logs.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_team.py Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
test_team_logging.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_users.py LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) 2024-10-08 21:57:03 -07:00

In total, litellm runs 500+ tests. Most tests live in /litellm/tests; the tests in this directory are specific to the proxy Docker image and run on CircleCI.
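The `test_in_memory_redis_sync_e2e` commits above point at the multi-instance pattern behind the ~90ms latency figure: each proxy instance answers budget checks from a local in-memory cache and periodically reconciles with a shared store so all instances converge on the same totals. A hedged sketch of that pattern, with a plain dict standing in for Redis and every name chosen for illustration only:

```python
# Each instance keeps local state for fast budget checks and syncs with a
# shared store so spend recorded on other instances eventually counts too.
class BudgetSyncer:
    def __init__(self, shared_store):
        self.shared = shared_store      # stands in for Redis
        self.local_spend = {}           # increments not yet pushed
        self.cached_totals = {}         # last-known global totals

    def record_spend(self, key, amount):
        self.local_spend[key] = self.local_spend.get(key, 0.0) + amount

    def over_budget(self, key, limit):
        # The budget check reads only local state: no network round trip.
        seen = self.cached_totals.get(key, 0.0) + self.local_spend.get(key, 0.0)
        return seen >= limit

    def sync(self):
        # Push pending local increments to the shared store, then pull the
        # global totals so other instances' spend becomes visible here.
        for key, amount in self.local_spend.items():
            self.shared[key] = self.shared.get(key, 0.0) + amount
        self.local_spend.clear()
        self.cached_totals = dict(self.shared)

shared = {}
a, b = BudgetSyncer(shared), BudgetSyncer(shared)
a.record_spend("openai:1d", 0.6)
b.record_spend("openai:1d", 0.5)
a.sync(); b.sync(); a.sync()
print(a.over_budget("openai:1d", 1.0))  # True: both instances' spend counted
```

The trade-off is bounded staleness: between syncs an instance may briefly undercount spend made elsewhere, in exchange for keeping the per-request check local.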