(feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039)
* add prompt_tokens_details in usage response
* use _prompt_tokens_details as a param in Usage
* fix linting errors
* fix type error
* fix ci/cd deps
* bump deps for openai
* bump deps openai
* fix llm translation testing
* fix llm translation embedding
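As a rough illustration of what this change exposes, here is a minimal sketch (not from this commit) of reading the new field off the usage block. It assumes a LiteLLM version that includes this patch, an `OPENAI_API_KEY` in the environment, and an OpenAI model that supports prompt caching; the field shape follows OpenAI's `prompt_tokens_details` object.

```python
# Minimal sketch: inspect prompt_tokens_details on a non-streaming response.
# Assumes OPENAI_API_KEY is set and the model supports prompt caching.
import litellm

response = litellm.completion(
    model="gpt-4o-mini",  # assumed caching-capable model, for illustration
    messages=[{"role": "user", "content": "Hello"}],
)

usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)

# With this change, Usage carries prompt_tokens_details when OpenAI reports
# it; cached_tokens counts prompt tokens that were served from the cache.
details = getattr(usage, "prompt_tokens_details", None)
if details is not None:
    print("cached prompt tokens:", details.cached_tokens)
```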
parent 9fccb4a0da
commit 4e88fd65e1
10 changed files with 1515 additions and 1428 deletions
poetry.lock (generated, 2833 lines changed): diff suppressed because it is too large