litellm/tests

Latest commit: Litellm dev 11 21 2024 (#6837) by Krish Dholakia (7e5085dc7b)
* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)

* test: test tool_call conversion when arguments is empty dict

Fixes https://github.com/BerriAI/litellm/issues/6833
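The empty-dict case being tested can be sketched as follows. This is an illustrative helper only, not litellm's actual conversion code; the function name is hypothetical. The point is that an empty arguments dict must serialize to the JSON string "{}", not to "" or None:

```python
import json

def serialize_tool_call_arguments(arguments) -> str:
    """Serialize tool-call arguments to the JSON string the OpenAI
    format expects: an empty dict must become "{}", never "" or None."""
    if arguments is None:
        return "{}"
    if isinstance(arguments, str):
        return arguments or "{}"
    return json.dumps(arguments)

# A tool call whose function takes no arguments:
print(serialize_tool_call_arguments({}))  # -> {}
```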

* fix(openai_like/handler.py): return more descriptive error message

Fixes https://github.com/BerriAI/litellm/issues/6812

* test: skip overloaded model

* docs(anthropic.md): update anthropic docs to show how to route to any new model

* feat(groq/): fake stream when 'response_format' param is passed

Groq doesn't support streaming when response_format is set
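The fake-stream pattern can be sketched as: fetch the complete non-streaming response, then yield it back in chunks so callers that iterate over a stream keep working. A minimal illustration, not litellm's implementation:

```python
def fake_stream(full_text: str, chunk_size: int = 10):
    """Re-chunk an already-complete response so it looks like a stream,
    for providers that can't stream when response_format is set."""
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i : i + chunk_size]

# The caller iterates exactly as if the provider had streamed:
chunks = list(fake_stream('{"answer": "42"}'))
assert "".join(chunks) == '{"answer": "42"}'
```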

* feat(groq/): add response_format support for groq

Closes https://github.com/BerriAI/litellm/issues/6845

* fix(o1_handler.py): remove fake streaming for o1

Closes https://github.com/BerriAI/litellm/issues/6801

* build(model_prices_and_context_window.json): add groq llama3.2b model pricing

Closes https://github.com/BerriAI/litellm/issues/6807

* fix(utils.py): fix handling ollama response format param

Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485
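The handling in question amounts to translating the OpenAI-style response_format param into Ollama's format option (Ollama accepts format: "json" for JSON mode). A sketch under those assumptions; the helper name is hypothetical, not litellm's code:

```python
def map_openai_params_for_ollama(optional_params: dict) -> dict:
    """Translate OpenAI-style optional params for Ollama. Only the
    response_format -> format mapping is shown here."""
    params = dict(optional_params)
    response_format = params.pop("response_format", None)
    if isinstance(response_format, dict) and response_format.get("type") == "json_object":
        params["format"] = "json"  # Ollama's JSON-mode switch
    return params

print(map_openai_params_for_ollama({"response_format": {"type": "json_object"}}))
# -> {'format': 'json'}
```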

* docs(sidebars.js): refactor chat endpoint placement

* fix: fix linting errors

* test: fix test

* test: fix test

* fix(openai_like/handler): handle max retries

* fix(streaming_handler.py): fix streaming check for openai-compatible providers

* test: update test

* test: correctly handle model is overloaded error

* test: update test

* test: fix test

* test: mark flaky test

---------

Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
2024-11-22 01:53:52 +05:30
| Name | Last commit message | Last commit date |
|---|---|---|
| anthropic_passthrough | (testing) - add e2e tests for anthropic pass through endpoints (#6840) | 2024-11-20 17:55:13 -08:00 |
| basic_proxy_startup_tests | (fix) don't block proxy startup if license check fails & using prometheus (#6839) | 2024-11-20 17:55:39 -08:00 |
| code_coverage_tests | (QOL improvement) add unit testing for all static_methods in litellm_logging.py (#6640) | 2024-11-07 16:26:53 -08:00 |
| documentation_tests | Litellm dev 11 20 2024 (#6831) | 2024-11-21 04:06:06 +05:30 |
| image_gen_tests | (feat) Add cost tracking for Azure Dall-e-3 Image Generation + use base class to ensure basic image generation tests pass (#6716) | 2024-11-12 20:02:16 -08:00 |
| llm_translation | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| load_tests | (load testing) add vertex_ai embeddings load test (#6004) | 2024-10-03 14:39:15 +05:30 |
| local_testing | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| logging_callback_tests | LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) | 2024-11-15 11:18:31 +05:30 |
| old_proxy_tests/tests | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| otel_tests | (feat) prometheus have well defined latency buckets (#6211) | 2024-10-14 17:16:01 +05:30 |
| pass_through_tests | (testing) - add e2e tests for anthropic pass through endpoints (#6840) | 2024-11-20 17:55:13 -08:00 |
| proxy_admin_ui_tests | (fix) passthrough - allow internal users to access /anthropic (#6843) | 2024-11-21 11:46:50 -08:00 |
| proxy_unit_tests | (fix) passthrough - allow internal users to access /anthropic (#6843) | 2024-11-21 11:46:50 -08:00 |
| router_unit_tests | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| gettysburg.wav | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00 |
| large_text.py | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00 |
| openai_batch_completions.jsonl | feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) | 2024-09-02 21:32:55 -07:00 |
| README.MD | Update README.MD | 2024-03-29 14:56:41 -07:00 |
| test_callbacks_on_proxy.py | fix - test num callbacks | 2024-05-17 22:06:51 -07:00 |
| test_config.py | mark test_team_logging as flaky | 2024-09-04 20:29:21 -07:00 |
| test_debug_warning.py | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00 |
| test_end_users.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_entrypoint.py | (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) | 2024-10-08 11:34:43 +05:30 |
| test_fallbacks.py | fix(user_api_key_auth.py): ensure user has access to fallback models | 2024-06-20 16:02:19 -07:00 |
| test_health.py | (test) /health/readiness | 2024-01-29 15:27:25 -08:00 |
| test_keys.py | Litellm key update fix (#6710) | 2024-11-14 00:42:37 +05:30 |
| test_logging.conf | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| test_models.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_openai_batches_endpoint.py | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| test_openai_endpoints.py | fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577) | 2024-11-05 22:03:44 +05:30 |
| test_openai_files_endpoints.py | test - batches endpoint | 2024-07-26 18:09:49 -07:00 |
| test_openai_fine_tuning.py | fix cancel ft job route | 2024-07-31 16:19:15 -07:00 |
| test_organizations.py | test: fix test | 2024-11-20 14:13:14 +05:30 |
| test_passthrough_endpoints.py | test test_basic_passthrough | 2024-08-06 21:17:07 -07:00 |
| test_ratelimit.py | test(test_ratelimit.py): fix test to send below rpm | 2024-04-30 19:35:21 -07:00 |
| test_spend_logs.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_team.py | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| test_team_logging.py | Litellm key update fix (#6710) | 2024-11-14 00:42:37 +05:30 |
| test_users.py | LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) | 2024-10-08 21:57:03 -07:00 |

In total, litellm runs 500+ tests. Most tests are in /litellm/tests; the tests in this directory cover only the proxy Docker image and are run on CircleCI.