llm_translation/
load_tests/
otel_tests/
pass_through_tests/
proxy_admin_ui_tests/
gettysburg.wav
large_text.py
openai_batch_completions.jsonl
README.MD
test_callbacks_on_proxy.py
test_config.py
test_debug_warning.py
test_end_users.py
test_entrypoint.py
test_fallbacks.py
test_health.py
test_keys.py
test_models.py
test_openai_batches_endpoint.py
test_openai_endpoints.py
test_openai_files_endpoints.py
test_openai_fine_tuning.py
test_organizations.py
test_passthrough_endpoints.py
test_ratelimit.py
test_spend_logs.py
test_team.py
test_team_logging.py
test_users.py
In total, litellm runs 500+ tests. Most tests live in /litellm/tests; the tests listed above are only those for the proxy Docker image, run in CircleCI.
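
Since these tests exercise a running proxy over HTTP, a typical test in this directory calls the proxy's REST endpoints and asserts on the responses. Below is a minimal sketch of such a smoke test, assuming a proxy listening at http://localhost:4000 with master key sk-1234 (both are placeholder assumptions; the real tests take their configuration from the CI environment):

```python
# Minimal sketch of a proxy smoke test in the style of this directory.
# PROXY_BASE_URL and MASTER_KEY are placeholder assumptions, not the
# values the CircleCI jobs actually use.
import httpx

PROXY_BASE_URL = "http://localhost:4000"  # assumed local proxy address
MASTER_KEY = "sk-1234"  # assumed master key


def test_proxy_liveliness():
    """The proxy's /health/liveliness endpoint should answer once it is up."""
    response = httpx.get(
        f"{PROXY_BASE_URL}/health/liveliness",
        headers={"Authorization": f"Bearer {MASTER_KEY}"},
    )
    assert response.status_code == 200
```

To try it locally, start a proxy in another terminal (e.g. `litellm --config config.yaml`) and run the file with pytest.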