| Name | Last commit message | Last commit date |
| --- | --- | --- |
| basic_proxy_startup_tests | (fix) don't block proxy startup if license check fails & using prometheus (#6839) | 2024-11-20 17:55:39 -08:00 |
| code_coverage_tests | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| documentation_tests | Litellm dev 11 20 2024 (#6831) | 2024-11-21 04:06:06 +05:30 |
| image_gen_tests | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| llm_translation | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| load_tests | (load testing) add vertex_ai embeddings load test (#6004) | 2024-10-03 14:39:15 +05:30 |
| local_testing | (feat) - provider budget improvements - ensure provider budgets work with multiple proxy instances + improve latency to ~90ms (#6886) | 2024-11-24 16:36:19 -08:00 |
| logging_callback_tests | LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) | 2024-11-23 15:17:40 +05:30 |
| old_proxy_tests/tests | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| otel_tests | (feat) prometheus have well defined latency buckets (#6211) | 2024-10-14 17:16:01 +05:30 |
| pass_through_tests | (feat) Add support for using @google/generative-ai JS with LiteLLM Proxy (#6899) | 2024-11-25 13:13:03 -08:00 |
| pass_through_unit_tests | feat - allow sending tags on vertex pass through requests (#6876) | 2024-11-25 12:12:09 -08:00 |
| proxy_admin_ui_tests | Litellm dev 11 23 2024 (#6881) | 2024-11-23 22:37:16 +05:30 |
| proxy_unit_tests | add coverage | 2024-11-25 15:48:40 -08:00 |
| router_unit_tests | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| gettysburg.wav | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00 |
| large_text.py | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00 |
| openai_batch_completions.jsonl | feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) | 2024-09-02 21:32:55 -07:00 |
| README.MD | Update README.MD | 2024-03-29 14:56:41 -07:00 |
| test_callbacks_on_proxy.py | fix - test num callbacks | 2024-05-17 22:06:51 -07:00 |
| test_config.py | test_team_logging | 2024-11-21 22:01:12 -08:00 |
| test_debug_warning.py | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00 |
| test_end_users.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_entrypoint.py | (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) | 2024-10-08 11:34:43 +05:30 |
| test_fallbacks.py | fix(user_api_key_auth.py): ensure user has access to fallback models | 2024-06-20 16:02:19 -07:00 |
| test_health.py | (test) /health/readiness | 2024-01-29 15:27:25 -08:00 |
| test_keys.py | Litellm key update fix (#6710) | 2024-11-14 00:42:37 +05:30 |
| test_logging.conf | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| test_models.py | add e2e tests for keys with regex patterns for /models and /model/info | 2024-11-27 18:38:36 -08:00 |
| test_openai_batches_endpoint.py | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| test_openai_endpoints.py | add test_regex_pattern_matching_e2e_test | 2024-11-25 15:39:42 -08:00 |
| test_openai_files_endpoints.py | test - batches endpoint | 2024-07-26 18:09:49 -07:00 |
| test_openai_fine_tuning.py | fix cancel ft job route | 2024-07-31 16:19:15 -07:00 |
| test_organizations.py | test: skip flaky test | 2024-11-22 19:23:36 +05:30 |
| test_passthrough_endpoints.py | test test_basic_passthrough | 2024-08-06 21:17:07 -07:00 |
| test_ratelimit.py | test(test_ratelimit.py): fix test to send below rpm | 2024-04-30 19:35:21 -07:00 |
| test_spend_logs.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| test_team.py | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| test_team_logging.py | test: skip flaky test | 2024-11-22 19:23:36 +05:30 |
| test_users.py | LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) | 2024-10-08 21:57:03 -07:00 |