litellm-mirror/litellm/types
Ishaan Jaff dee18cbf31 (feat) add cost tracking for OpenAI prompt caching (#6055)
* add cache_read_input_token_cost for prompt caching models

* add prompt caching for latest models

* add openai cost calculator

* add openai prompt caching test

* fix lint check

* add note on how usage._cache_read_input_tokens is used

* fix cost calc for openai whisper

* use output_cost_per_second

* add input_cost_per_second
2024-10-05 14:20:15 +05:30
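A minimal sketch of the cost math the bullets above describe, assuming cached prompt tokens are billed at the discounted cache_read_input_token_cost rate while audio models such as whisper are billed by duration rather than by token. The function names and signatures below are illustrative assumptions, not litellm's actual API; the cached-token count is whatever the provider reports in the usage object (per the usage._cache_read_input_tokens note above).

```python
# Illustrative sketch only -- these helpers are hypothetical, not litellm's
# actual cost-calculator functions.

def prompt_caching_cost(
    prompt_tokens: int,
    cached_tokens: int,  # reported by the provider in the usage object
    completion_tokens: int,
    input_cost_per_token: float,
    cache_read_input_token_cost: float,
    output_cost_per_token: float,
) -> float:
    """Bill cache hits at the discounted cache-read rate and the remaining
    (uncached) prompt tokens at the normal input rate."""
    uncached_tokens = prompt_tokens - cached_tokens
    return (
        uncached_tokens * input_cost_per_token
        + cached_tokens * cache_read_input_token_cost
        + completion_tokens * output_cost_per_token
    )


def per_second_cost(
    duration_seconds: float,
    input_cost_per_second: float,
    output_cost_per_second: float = 0.0,
) -> float:
    """Audio models (e.g. whisper) are priced by audio duration, which is
    why per-second cost fields exist alongside the per-token ones."""
    return duration_seconds * (input_cost_per_second + output_cost_per_second)
```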
llms LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) 2024-10-02 22:00:28 -04:00
adapter.py feat(anthropic_adapter.py): support for translating anthropic params to openai format 2024-07-10 00:32:28 -07:00
completion.py LiteLLM Minor Fixes and Improvements (09/12/2024) (#5658) 2024-09-12 23:04:06 -07:00
embedding.py Removed config dict type definition 2024-05-17 10:39:00 +08:00
files.py Fix file type handling of uppercase extensions 2024-06-13 15:00:16 -07:00
guardrails.py LiteLLM Minor Fixes & Improvements (09/17/2024) (#5742) 2024-09-17 23:00:04 -07:00
router.py (fix proxy) model_group/info support rerank models (#5955) 2024-09-28 10:54:43 -07:00
services.py add new BATCH_WRITE_TO_DB type for service logger 2024-07-27 11:36:51 -07:00
utils.py (feat) add cost tracking for OpenAI prompt caching (#6055) 2024-10-05 14:20:15 +05:30