litellm/.circleci
Latest commit: Ishaan Jaff · 4e88fd65e1
(feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039)
* add prompt_tokens_details in usage response

* use _prompt_tokens_details as a param in Usage

* fix linting errors

* fix type error

* fix ci/cd deps

* bump deps for openai

* bump deps openai

* fix llm translation testing

* fix llm translation embedding
2024-10-03 23:31:10 +05:30
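
Per the commit message above, this change surfaces OpenAI's prompt-cache accounting on non-streaming responses by adding `prompt_tokens_details` to the `Usage` object (passed internally as `_prompt_tokens_details`, per the bullets above). A minimal sketch of how a caller might read the new field, assuming litellm's top-level `completion` API, an OpenAI model with prompt caching enabled, and that the field mirrors OpenAI's `prompt_tokens_details.cached_tokens` shape; the model name and prompt are illustrative:

```python
import litellm

# A long, repeated prompt prefix is what OpenAI's prompt cache keys on
# (OpenAI documents a ~1024-token minimum before caching kicks in).
response = litellm.completion(
    model="gpt-4o",  # illustrative; any caching-capable OpenAI model
    messages=[{"role": "user", "content": "Summarize the release notes."}],
)

usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)

# New in this change: prompt token details for non-streaming responses.
# cached_tokens reports how many prompt tokens were served from the cache.
if usage.prompt_tokens_details is not None:
    print("cached prompt tokens:", usage.prompt_tokens_details.cached_tokens)
```

On a first call the cache is cold, so `cached_tokens` would typically be 0; a repeated call with the same long prefix is where a non-zero count would show up.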
config.yml (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30
requirements.txt (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) 2024-10-03 23:31:10 +05:30