litellm/tests/llm_translation

Latest commit: 4e88fd65e1 by Ishaan Jaff, 2024-10-03 23:31:10 +05:30
(feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039)

* add prompt_tokens_details in usage response
* use _prompt_tokens_details as a param in Usage
* fix linting errors
* fix type error
* fix ci/cd deps
* bump deps for openai
* bump deps openai
* fix llm translation testing
* fix llm translation embedding
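The headline change surfaces OpenAI's prompt-cache accounting on non-streaming responses by adding a prompt_tokens_details field to the Usage object litellm returns. Below is a minimal sketch of reading that field; it assumes an OpenAI API key is configured and the chosen model supports prompt caching, and the nested field name cached_tokens follows the OpenAI usage schema that this change mirrors rather than anything stated in the listing above.

```python
# Minimal sketch: read prompt_tokens_details from a non-streaming completion.
# Assumes OPENAI_API_KEY is set and the model supports prompt caching; field
# names mirror the OpenAI usage schema and may differ across litellm versions.
import litellm

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)

usage = response.usage
print("prompt tokens:", usage.prompt_tokens)

# New with this change: prompt_tokens_details reports how many prompt tokens
# were served from the provider's prompt cache (non-streaming responses here).
details = getattr(usage, "prompt_tokens_details", None)
if details is not None:
    print("cached prompt tokens:", getattr(details, "cached_tokens", None))
```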

Files in this directory (file, last commit, date):
conftest.py [Feat] Add max_completion_tokens param (#5691) 2024-09-14 14:57:01 -07:00
Readme.md LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) 2024-09-17 08:05:52 -07:00
test_anthropic_completion.py LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) 2024-09-27 22:52:57 -07:00
test_databricks.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_fireworks_ai_translation.py LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) 2024-09-19 13:25:29 -07:00
test_max_completion_tokens.py (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
test_nvidia_nim.py (feat) add nvidia nim embeddings (#6032) 2024-10-03 17:12:14 +05:30
test_openai_o1.py [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) 2024-09-17 20:23:14 -07:00
test_optional_params.py LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) 2024-10-02 22:00:28 -04:00
test_prompt_caching.py (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
test_supports_vision.py [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) 2024-09-21 11:35:55 -07:00

More tests live under litellm/tests/*.
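Several of the listed tests exercise request parameters rather than response parsing; test_max_completion_tokens.py, for instance, covers the max_completion_tokens parameter added in #5691. A minimal sketch of using it is below, assuming an OpenAI API key is configured; the parameter name follows the OpenAI API, and other providers may have it translated or dropped by litellm's parameter mapping.

```python
# Minimal sketch: cap generated output via max_completion_tokens (see #5691).
# Assumes OPENAI_API_KEY is set; provider support for this parameter varies.
import litellm

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize prompt caching in one line."}],
    max_completion_tokens=32,  # upper bound on completion (output) tokens
)

print(response.choices[0].message.content)
print("completion tokens used:", response.usage.completion_tokens)
```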