Name | Last commit message | Last commit date
code_coverage_tests | Litellm router code coverage 3 (#6274) | 2024-10-16 21:30:25 -07:00
documentation_tests | docs(configs.md): document all environment variables (#6185) | 2024-10-13 09:57:03 -07:00
llm_translation | Litellm dev 10 22 2024 (#6384) | 2024-10-22 21:18:54 -07:00
load_tests | (load testing) add vertex_ai embeddings load test (#6004) | 2024-10-03 14:39:15 +05:30
local_testing | (fix) Langfuse key based logging (#6372) | 2024-10-23 18:24:22 +05:30
logging_callback_tests | (fix) Langfuse key based logging (#6372) | 2024-10-23 18:24:22 +05:30
old_proxy_tests/tests | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
otel_tests | (feat) prometheus have well defined latency buckets (#6211) | 2024-10-14 17:16:01 +05:30
pass_through_tests | test(skip-flaky-google-context-caching-test): google is not reliable. their sample code is also not working | 2024-10-22 12:06:30 -07:00
proxy_admin_ui_tests | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30
router_unit_tests | Litellm router code coverage 3 (#6274) | 2024-10-16 21:30:25 -07:00
gettysburg.wav | feat(main.py): support openai transcription endpoints | 2024-03-08 10:25:19 -08:00
large_text.py | fix(router.py): check for context window error when handling 400 status code errors | 2024-03-26 08:08:15 -07:00
openai_batch_completions.jsonl | feat(router.py): Support Loadbalancing batch azure api endpoints (#5469) | 2024-09-02 21:32:55 -07:00
README.MD | Update README.MD | 2024-03-29 14:56:41 -07:00
test_callbacks_on_proxy.py | fix - test num callbacks | 2024-05-17 22:06:51 -07:00
test_config.py | mark test_team_logging as flaky | 2024-09-04 20:29:21 -07:00
test_debug_warning.py | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00
test_end_users.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
test_entrypoint.py | (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker (#6110) | 2024-10-08 11:34:43 +05:30
test_fallbacks.py | fix(user_api_key_auth.py): ensure user has access to fallback models | 2024-06-20 16:02:19 -07:00
test_health.py | (test) /health/readiness | 2024-01-29 15:27:25 -08:00
test_keys.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
test_logging.conf | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00
test_models.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
test_openai_batches_endpoint.py | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00
test_openai_endpoints.py | Litellm fix router testing (#5748) | 2024-09-17 18:02:23 -07:00
test_openai_files_endpoints.py | test - batches endpoint | 2024-07-26 18:09:49 -07:00
test_openai_fine_tuning.py | fix cancel ft job route | 2024-07-31 16:19:15 -07:00
test_organizations.py | (feat proxy) [beta] add support for organization role based access controls (#6112) | 2024-10-09 15:18:18 +05:30
test_passthrough_endpoints.py | test test_basic_passthrough | 2024-08-06 21:17:07 -07:00
test_ratelimit.py | test(test_ratelimit.py): fix test to send below rpm | 2024-04-30 19:35:21 -07:00
test_spend_logs.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
test_team.py | LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) | 2024-10-08 21:57:03 -07:00
test_team_logging.py | mark test_team_logging as flaky | 2024-09-04 20:29:21 -07:00
test_users.py | LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) | 2024-10-08 21:57:03 -07:00