litellm-mirror/litellm/litellm_core_utils
Krish Dholakia c3edfc2c92
LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394)
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching

* fix(context_caching/transformation.py): just use last identified cache point

Fixes https://github.com/BerriAI/litellm/issues/6738

* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google

Fixes https://github.com/BerriAI/litellm/issues/6738
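
For context, a minimal sketch of the "first contiguous block" idea: split the message list into the first run of cache-marked messages (the part sent to Google's context-caching endpoint) and everything else (kept in the live request). The message shape and helper names below are illustrative assumptions, not litellm's actual transformation code.

```python
from typing import List, Tuple


def is_cache_point(message: dict) -> bool:
    """A message is a cache point if any of its content blocks sets cache_control."""
    content = message.get("content")
    if isinstance(content, list):
        return any(
            block.get("cache_control") for block in content if isinstance(block, dict)
        )
    return bool(message.get("cache_control"))


def split_first_contiguous_cache_block(
    messages: List[dict],
) -> Tuple[List[dict], List[dict]]:
    """Return (cached_block, remaining_messages).

    Only the first contiguous run of cache-marked messages is cached; anything
    after that run (cache-marked or not) stays in the normal request body.
    """
    start = next((i for i, m in enumerate(messages) if is_cache_point(m)), None)
    if start is None:
        return [], messages
    end = start
    while end < len(messages) and is_cache_point(messages[end]):
        end += 1
    return messages[start:end], messages[:start] + messages[end:]
```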

* fix(vertex_ai/gemini/): track context caching tokens
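
A sketch of the token-tracking side: Gemini reports cached tokens in its usageMetadata as cachedContentTokenCount, and that count needs to be carried into the OpenAI-style usage object so cost calculation can see it. The output field names below are illustrative assumptions, not litellm's exact usage schema.

```python
def map_gemini_usage(usage_metadata: dict) -> dict:
    """Map a Gemini-style usageMetadata block onto an OpenAI-style usage dict."""
    return {
        "prompt_tokens": usage_metadata.get("promptTokenCount", 0),
        "completion_tokens": usage_metadata.get("candidatesTokenCount", 0),
        "total_tokens": usage_metadata.get("totalTokenCount", 0),
        # tokens served from the context cache (assumed here to be included in prompt_tokens)
        "prompt_tokens_details": {
            "cached_tokens": usage_metadata.get("cachedContentTokenCount", 0)
        },
    }
```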

* refactor(gemini/): place transformation.py inside `chat/` folder

make it easy for users to know we support the equivalent endpoint

* fix: fix import

* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder

make it easier to see cost calculation logic

* fix: fix linting errors

* fix: fix circular import

* feat(gemini/cost_calculator.py): support gemini context caching cost calculation

generalizes anthropic's cost calculation function and uses it across anthropic + gemini
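
A rough sketch of what a provider-agnostic, cache-aware cost function could look like. The parameter names, the pricing-map keys (input_cost_per_token, output_cost_per_token, cache_read_input_token_cost, cache_creation_input_token_cost), and the assumption that prompt_tokens already includes the cached token counts are illustrative, not litellm's confirmed signatures.

```python
from typing import Tuple


def generic_cost_per_token(
    prompt_tokens: int,
    completion_tokens: int,
    cache_read_input_tokens: int,
    cache_creation_input_tokens: int,
    model_info: dict,
) -> Tuple[float, float]:
    """Return (prompt_cost, completion_cost) in USD.

    Cached-read tokens are billed at a discounted rate, cache-creation tokens
    (Anthropic) at a premium rate, and the remaining prompt tokens at the base rate.
    """
    input_rate = model_info["input_cost_per_token"]
    cache_read_rate = model_info.get("cache_read_input_token_cost", input_rate)
    cache_creation_rate = model_info.get("cache_creation_input_token_cost", input_rate)

    uncached_prompt_tokens = max(
        0, prompt_tokens - cache_read_input_tokens - cache_creation_input_tokens
    )
    prompt_cost = (
        uncached_prompt_tokens * input_rate
        + cache_read_input_tokens * cache_read_rate
        + cache_creation_input_tokens * cache_creation_rate
    )
    completion_cost = completion_tokens * model_info["output_cost_per_token"]
    return prompt_cost, completion_cost
```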

* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching

Closes https://github.com/BerriAI/litellm/issues/6891

* docs(gemini.md): add gemini context caching architecture diagram

make it easier for users to understand how context caching works

* docs(gemini.md): link to relevant gemini context caching code

* docs(gemini/context_caching): add readme in github, making it easy for devs to know context caching is supported + where to find the code

* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario
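
For the 128k scenario, a minimal sketch of tiered prompt pricing. The key name input_cost_per_token_above_128k_tokens and the choice to bill the whole prompt at the higher tier once the threshold is crossed follow Google's published tier structure, but are assumptions about the pricing-map format rather than the actual litellm code.

```python
def gemini_prompt_cost(prompt_tokens: int, model_info: dict) -> float:
    """Gemini 1.5 models charge a higher per-token rate for prompts beyond 128k tokens."""
    base_rate = model_info["input_cost_per_token"]
    above_128k_rate = model_info.get("input_cost_per_token_above_128k_tokens", base_rate)
    if prompt_tokens > 128_000:
        # the entire prompt is billed at the higher tier once it exceeds 128k tokens
        return prompt_tokens * above_128k_rate
    return prompt_tokens * base_rate
```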

* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation
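
A minimal sketch of the DeepSeek case, assuming the usage object exposes prompt_cache_hit_tokens / prompt_cache_miss_tokens and the pricing map carries a discounted cache_read_input_token_cost (both field names are assumptions):

```python
def deepseek_prompt_cost(usage: dict, model_info: dict) -> float:
    """Cache hits are billed at the discounted cached-read rate, misses at the base rate."""
    hit_tokens = usage.get("prompt_cache_hit_tokens", 0)
    miss_tokens = usage.get("prompt_cache_miss_tokens", 0)
    base_rate = model_info["input_cost_per_token"]
    cached_rate = model_info.get("cache_read_input_token_cost", base_rate)
    return hit_tokens * cached_rate + miss_tokens * base_rate
```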

* test: fix test
2024-12-23 22:02:52 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| audio_utils | fix import error | 2024-09-05 10:09:44 -07:00 |
| llm_cost_calc | LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) | 2024-12-23 22:02:52 -08:00 |
| llm_response_utils | LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) | 2024-12-17 15:33:36 -08:00 |
| prompt_templates | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| tokenizers | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| asyncify.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| core_helpers.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| default_encoding.py | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| duration_parser.py | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| exception_mapping_utils.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| get_llm_provider_logic.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| get_supported_openai_params.py | [Bug fix]: Triton /infer handler incompatible with batch responses (#7337) | 2024-12-20 20:59:40 -08:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | (feat) Add basic logging support for /batches endpoints (#7381) | 2024-12-23 17:45:03 -08:00 |
| llm_request_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| logging_utils.py | Complete 'requests' library removal (#7350) | 2024-12-22 07:21:25 -08:00 |
| mock_functions.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| README.md | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| realtime_streaming.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| redact_messages.py | (feat) Allow enabling logging message / response for specific virtual keys (#7071) | 2024-12-06 21:25:36 -08:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| rules.py | Litellm dev 11 07 2024 (#6649) | 2024-11-08 19:34:22 +05:30 |
| streaming_chunk_builder_utils.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| streaming_handler.py | Complete 'requests' library removal (#7350) | 2024-12-22 07:21:25 -08:00 |
| token_counter.py | fix: Support WebP image format and avoid token calculation error (#7182) | 2024-12-12 14:32:39 -08:00 |

Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

Core files:

  • streaming_handler.py: the core streaming logic + streaming-related helper utils.
  • core_helpers.py: code used in types/ - e.g. map_finish_reason.
  • exception_mapping_utils.py: utils for mapping exceptions to openai-compatible error types.
  • default_encoding.py: code for loading the default encoding (tiktoken).
  • get_llm_provider_logic.py: code for inferring the LLM provider from a given model name.
  • duration_parser.py: code for parsing durations - e.g. "1d", "1mo", "10s" (a minimal parsing sketch follows below).
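
For illustration, a minimal duration-parsing sketch in the spirit of duration_parser.py (not its actual implementation; calendar months are approximated as 30 days):

```python
import re
from datetime import timedelta

_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),  # calendar months approximated as 30 days
}


def parse_duration(value: str) -> timedelta:
    """Parse strings like "10s", "1d", "2mo" into a timedelta."""
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value.strip())
    if match is None:
        raise ValueError(f"unsupported duration format: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return amount * _UNITS[unit]


assert parse_duration("10s") == timedelta(seconds=10)
assert parse_duration("2mo") == timedelta(days=60)
```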