Latest commit:

* feat(proxy_cli.py): add new `log_config` CLI param. Allows passing a logging.conf to uvicorn on startup
* docs(cli.md): add logging conf to uvicorn CLI docs
* fix(get_llm_provider_logic.py): fix default API base for litellm_proxy. Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): add support for Jina AI embeddings. Closes https://github.com/BerriAI/litellm/issues/6337
* docs(deploy.md): update entrypoint.sh filepath post-refactor. Fixes outdated docs
* feat(prometheus.py): emit `time_to_first_token` metric on Prometheus. Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time-to-first-token metric if `stream` is True. Enables more accurate TTFT usage
* test: handle Vertex API instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix DeepInfra default API base
* fix(anthropic/transformation.py): remove Anthropic beta header (#6361)
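The `log_config` change above forwards a logging configuration file to uvicorn when the proxy starts. As a rough sketch of how that might be used (the flag name comes from the commit message above and is assumed to be exposed as `--log_config`; the file name `logging.conf`, the `proxy_config.yaml` path, and the config contents below are illustrative, not taken from the repo), uvicorn accepts a standard Python `logging.config.fileConfig`-style INI file:

```bash
# Illustrative only: write a minimal Python fileConfig-style logging config
# that uvicorn can load. The handler/formatter names are arbitrary.
cat > logging.conf <<'EOF'
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=default

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
formatter=default
args=(sys.stdout,)

[formatter_default]
format=%(asctime)s %(levelname)s %(name)s %(message)s
EOF

# Start the proxy, forwarding the logging config to uvicorn via the new flag
# (assuming the param is exposed as --log_config; proxy_config.yaml is a placeholder).
litellm --config proxy_config.yaml --log_config logging.conf
```

The `test_logging.conf` file in the listing below presumably serves as the fixture used to exercise this path in CI.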
Directory contents:

* code_coverage_tests
* documentation_tests
* llm_translation
* load_tests
* local_testing
* logging_callback_tests
* old_proxy_tests/tests
* otel_tests
* pass_through_tests
* proxy_admin_ui_tests
* router_unit_tests
* gettysburg.wav
* large_text.py
* openai_batch_completions.jsonl
* README.MD
* test_callbacks_on_proxy.py
* test_config.py
* test_debug_warning.py
* test_end_users.py
* test_entrypoint.py
* test_fallbacks.py
* test_health.py
* test_keys.py
* test_logging.conf
* test_models.py
* test_openai_batches_endpoint.py
* test_openai_endpoints.py
* test_openai_files_endpoints.py
* test_openai_fine_tuning.py
* test_organizations.py
* test_passthrough_endpoints.py
* test_ratelimit.py
* test_spend_logs.py
* test_team.py
* test_team_logging.py
* test_users.py
In total, litellm runs 500+ tests. Most tests live in `/litellm/tests`; the tests in this directory cover only the proxy Docker image and are used for CircleCI.
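Since the files listed above are standard pytest modules, a single proxy test can presumably be run locally in the usual way. This is only a sketch: several of these suites are end-to-end tests that expect a running proxy and the environment CircleCI provides, so a bare local run may need extra setup.

```bash
# Hedged sketch: run one proxy test file from the repository root with pytest.
# Many of these tests are end-to-end and assume a proxy is already running
# (e.g. started with `litellm --config ...`), so additional environment
# variables or services may be required for them to pass locally.
pip install pytest
pytest tests/test_health.py -v
```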