litellm/tests/llm_translation
Krish Dholakia 9695c1af10
LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119)
* refactor(cost_calculator.py): move error line to debug - https://github.com/BerriAI/litellm/issues/5683#issuecomment-2398599498

* fix(migrate-hidden-params-to-read-from-standard-logging-payload): Fixes https://github.com/BerriAI/litellm/issues/5546#issuecomment-2399994026

* fix(types/utils.py): mark weight as a litellm param

Fixes https://github.com/BerriAI/litellm/issues/5781

* feat(internal_user_endpoints.py): fix /user/info + show user max budget as default max budget

Fixes https://github.com/BerriAI/litellm/issues/6117

* feat: support returning team member budget in `/user/info`

Sets the user's max budget within the team as the max budget shown on the UI (see the `/user/info` request sketch after the commit details below).

Closes https://github.com/BerriAI/litellm/issues/6117

* bug fix for optional parameter passing to replicate (#6067)

Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>

* fix(o1_transformation.py): handle o1 temperature=0

o1 models don't support temperature=0; allow the admin to drop this param (see the sketch after this list).

* test: fix test
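
The o1 temperature fix above leans on LiteLLM's param-dropping behavior: with `drop_params` enabled, parameters a provider doesn't accept (here `temperature=0` for o1 models) are removed instead of raising an error. A minimal sketch, assuming `OPENAI_API_KEY` is set in the environment:

```python
import litellm

# Drop provider-unsupported params instead of erroring out.
# Can also be passed per-call as completion(..., drop_params=True).
litellm.drop_params = True

response = litellm.completion(
    model="o1-mini",
    messages=[{"role": "user", "content": "Say hello"}],
    temperature=0,  # not supported by o1; dropped rather than rejected
)
print(response.choices[0].message.content)
```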

---------

Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
Co-authored-by: Mandana Vaziri <mvaziri@us.ibm.com>
2024-10-08 21:57:03 -07:00
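
To see the budget fields described in the `/user/info` items above, the proxy endpoint can be queried directly. A hedged sketch against a locally running LiteLLM proxy; the base URL, admin key, and user_id are placeholder assumptions, not values from this repo:

```python
import requests

# Placeholder proxy details - adjust for your deployment.
PROXY_BASE_URL = "http://localhost:4000"
ADMIN_KEY = "sk-1234"

# GET /user/info for a given user_id, authenticated with a proxy key.
resp = requests.get(
    f"{PROXY_BASE_URL}/user/info",
    params={"user_id": "my-user-id"},
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    timeout=10,
)
resp.raise_for_status()

# Per #6117, the response should now surface the user's max budget
# (and the team-member budget when the user belongs to a team).
print(resp.json())
```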
conftest.py [Feat] Add max_completion_tokens param (#5691) 2024-09-14 14:57:01 -07:00
Readme.md LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) 2024-09-17 08:05:52 -07:00
test_anthropic_completion.py LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) 2024-09-27 22:52:57 -07:00
test_databricks.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_fireworks_ai_translation.py LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) 2024-09-19 13:25:29 -07:00
test_max_completion_tokens.py (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
test_nvidia_nim.py (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
test_openai_o1.py [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) 2024-09-17 20:23:14 -07:00
test_optional_params.py LiteLLM Minor Fixes & Improvements (10/08/2024) (#6119) 2024-10-08 21:57:03 -07:00
test_prompt_caching.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_supports_vision.py [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) 2024-09-21 11:35:55 -07:00

More tests live under litellm/litellm/tests/*.