Mirror of https://github.com/BerriAI/litellm.git
* add cache_read_input_token_cost for prompt caching models
* add prompt caching for latest models
* add openai cost calculator
* add openai prompt caching test
* fix lint check
* add note on how usage._cache_read_input_tokens is used
* fix cost calc for openai whisper
* use output_cost_per_second
* add input_cost_per_second
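The commit message describes two pricing paths: a discounted per-token rate for prompt tokens served from the cache (cache_read_input_token_cost, fed from usage._cache_read_input_tokens) and duration-based pricing (input_cost_per_second / output_cost_per_second) for audio models like whisper. Below is a minimal sketch of that cost math. The config key names mirror the commit message, but the ModelCost dataclass and estimate_cost helper are illustrative assumptions, not litellm's actual cost_calculator API.

```python
# Illustrative sketch of the cost math described above. The key names
# (cache_read_input_token_cost, input_cost_per_second, ...) come from the
# commit message; the helper itself is hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelCost:
    input_cost_per_token: float = 0.0
    output_cost_per_token: float = 0.0
    # Discounted rate applied to tokens served from the prompt cache.
    cache_read_input_token_cost: Optional[float] = None
    # Duration-based pricing, e.g. for audio models like whisper.
    input_cost_per_second: Optional[float] = None
    output_cost_per_second: Optional[float] = None


def estimate_cost(
    cost: ModelCost,
    prompt_tokens: int = 0,
    completion_tokens: int = 0,
    cache_read_input_tokens: int = 0,  # e.g. usage._cache_read_input_tokens
    duration_seconds: float = 0.0,
) -> float:
    """Return an estimated request cost in USD."""
    # Duration-billed models (e.g. whisper) use per-second rates instead
    # of per-token rates; prefer the input rate, fall back to the output rate.
    if cost.input_cost_per_second is not None or cost.output_cost_per_second is not None:
        per_second = cost.input_cost_per_second or cost.output_cost_per_second or 0.0
        return duration_seconds * per_second

    # Cached prompt tokens are billed at the cheaper cache-read rate when
    # the model defines one; otherwise they fall back to the normal rate.
    uncached = max(prompt_tokens - cache_read_input_tokens, 0)
    total = uncached * cost.input_cost_per_token
    if cost.cache_read_input_token_cost is not None:
        total += cache_read_input_tokens * cost.cache_read_input_token_cost
    else:
        total += cache_read_input_tokens * cost.input_cost_per_token
    total += completion_tokens * cost.output_cost_per_token
    return total
```

Applying the discounted rate only when the model map defines cache_read_input_token_cost keeps the calculator backward compatible with models that predate prompt caching.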
llms/
adapter.py
completion.py
embedding.py
files.py
guardrails.py
router.py
services.py
utils.py