litellm/tests
Krish Dholakia 7e9d8b58f6
LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870)
* feat(pass_through_endpoints/): support logging anthropic/gemini pass through calls to langfuse/s3/etc.

* fix(utils.py): allow disabling end user cost tracking with new param

Allows the proxy admin to disable cost tracking for end users, keeping Prometheus metrics small
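
The gating logic for such a flag can be sketched as below. This is an illustrative helper, not LiteLLM's actual implementation; only the `disable_end_user_cost_tracking` setting name comes from this changelog, and the function name and signature are assumptions:

```python
from typing import Optional


def end_user_id_for_cost_tracking(
    request_user_id: Optional[str],
    disable_end_user_cost_tracking: bool = False,
) -> Optional[str]:
    """Return the end-user id to attach to spend logs, or None when
    end-user cost tracking is disabled. Dropping the id keeps the
    per-user label cardinality (and thus metric size) small."""
    if disable_end_user_cost_tracking:
        return None
    return request_user_id
```

With the flag off, the id passes through unchanged; with it on, spend logs carry no end-user dimension at all.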

* docs(configs.md): add disable_end_user_cost_tracking reference to docs

* feat(key_management_endpoints.py): add support for restricting access to `/key/generate` by team/proxy level role

Enables admins to restrict key creation and assign team admins to handle key distribution
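
A minimal sketch of the kind of role check this implies, assuming a restriction flag plus proxy-admin and team-admin roles (the function name, role strings, and parameters here are hypothetical, not LiteLLM's API):

```python
def can_generate_key(
    user_role: str,
    is_team_admin: bool,
    restrict_key_generation: bool,
) -> bool:
    """Illustrative check for /key/generate access: when key generation
    is restricted, only proxy admins and team admins may create keys;
    otherwise any authenticated user can."""
    if not restrict_key_generation:
        return True
    return user_role == "proxy_admin" or is_team_admin
```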

* test(test_key_management.py): add unit testing for personal / team key restriction checks

* docs: add docs on restricting key creation

* docs(finetuned_models.md): add new guide on calling finetuned models

* docs(input.md): cleanup anthropic supported params

Closes https://github.com/BerriAI/litellm/issues/6856

* test(test_embedding.py): add test for passing extra headers via embedding

* feat(cohere/embed): pass client to async embedding

* feat(rerank.py): add `/v1/rerank` if missing for cohere base url

Closes https://github.com/BerriAI/litellm/issues/6844
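
The URL normalization described above can be sketched as a small pure function (an illustrative helper, not LiteLLM's exact implementation):

```python
def ensure_rerank_path(api_base: str) -> str:
    """Append the `/v1/rerank` route to a Cohere-style base URL when the
    caller supplied only the host, leaving already-complete URLs alone."""
    base = api_base.rstrip("/")
    if not base.endswith("/v1/rerank"):
        base = base + "/v1/rerank"
    return base
```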

* fix(main.py): pass extra_headers param to openai

Fixes https://github.com/BerriAI/litellm/issues/6836

* fix(litellm_logging.py): don't disable global callbacks when dynamic callbacks are set

Fixes an issue where global callbacks (e.g. Prometheus) were overridden when Langfuse was set dynamically
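
The fix amounts to merging request-scoped callbacks with the global ones rather than replacing them. A sketch, assuming callbacks are identified by name (the helper and its signature are illustrative):

```python
from typing import List


def combine_callbacks(
    global_callbacks: List[str],
    dynamic_callbacks: List[str],
) -> List[str]:
    """Merge per-request (dynamic) callbacks into the globally configured
    list without duplicates, so e.g. Prometheus keeps firing when
    Langfuse is enabled dynamically for a single request."""
    merged = list(global_callbacks)
    for cb in dynamic_callbacks:
        if cb not in merged:
            merged.append(cb)
    return merged
```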

* fix(handler.py): fix linting error

* fix: fix typing

* build: add conftest to proxy_admin_ui_tests/

* test: fix test

* fix: fix linting errors

* test: fix test

* fix: fix pass through testing
2024-11-23 15:17:40 +05:30
basic_proxy_startup_tests (fix) don't block proxy startup if license check fails & using prometheus (#6839) 2024-11-20 17:55:39 -08:00
code_coverage_tests (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) 2024-11-22 18:47:26 -08:00
documentation_tests Litellm dev 11 20 2024 (#6831) 2024-11-21 04:06:06 +05:30
image_gen_tests (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) 2024-11-21 19:03:02 -08:00
llm_translation Litellm dev 11 21 2024 (#6837) 2024-11-22 01:53:52 +05:30
load_tests (load testing) add vertex_ai embeddings load test (#6004) 2024-10-03 14:39:15 +05:30
local_testing LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
logging_callback_tests LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
old_proxy_tests/tests Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
otel_tests (feat) prometheus have well defined latency buckets (#6211) 2024-10-14 17:16:01 +05:30
pass_through_tests (feat) use @google-cloud/vertexai js sdk with litellm (#6873) 2024-11-22 16:50:10 -08:00
pass_through_unit_tests LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
proxy_admin_ui_tests LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) 2024-11-23 15:17:40 +05:30
proxy_unit_tests (fix) passthrough - allow internal users to access /anthropic (#6843) 2024-11-21 11:46:50 -08:00
router_unit_tests Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
gettysburg.wav feat(main.py): support openai transcription endpoints 2024-03-08 10:25:19 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) 2024-09-02 21:32:55 -07:00
README.MD Update README.MD 2024-03-29 14:56:41 -07:00
test_callbacks_on_proxy.py fix - test num callbacks 2024-05-17 22:06:51 -07:00
test_config.py test_team_logging 2024-11-21 22:01:12 -08:00
test_debug_warning.py fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
test_end_users.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_entrypoint.py (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) 2024-10-08 11:34:43 +05:30
test_fallbacks.py fix(user_api_key_auth.py): ensure user has access to fallback models 2024-06-20 16:02:19 -07:00
test_health.py (test) /health/readiness 2024-01-29 15:27:25 -08:00
test_keys.py Litellm key update fix (#6710) 2024-11-14 00:42:37 +05:30
test_logging.conf feat(proxy_cli.py): add new 'log_config' cli param (#6352) 2024-10-21 21:25:58 -07:00
test_models.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_openai_batches_endpoint.py test batches endpoint on proxy 2024-07-30 09:46:30 -07:00
test_openai_endpoints.py fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577) 2024-11-05 22:03:44 +05:30
test_openai_files_endpoints.py test - batches endpoint 2024-07-26 18:09:49 -07:00
test_openai_fine_tuning.py fix cancel ft job route 2024-07-31 16:19:15 -07:00
test_organizations.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_passthrough_endpoints.py test test_basic_passthrough 2024-08-06 21:17:07 -07:00
test_ratelimit.py test(test_ratelimit.py): fix test to send below rpm 2024-04-30 19:35:21 -07:00
test_spend_logs.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
test_team.py Litellm dev 11 08 2024 (#6658) 2024-11-08 22:07:17 +05:30
test_team_logging.py test: skip flaky test 2024-11-22 19:23:36 +05:30
test_users.py LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) 2024-10-08 21:57:03 -07:00

In total, litellm runs 500+ tests. Most tests live in /litellm/tests; the tests in this directory cover only the proxy Docker image and are run in CircleCI.