litellm-mirror/litellm/litellm_core_utils
Krish Dholakia f966e279a6 LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263)
* fix(factory.py): skip empty text blocks for bedrock user messages

Fixes https://github.com/BerriAI/litellm/issues/7169

* Add support for Gemini 2.0 GoogleSearch tool (#7257)

* Add support for google_search tool in gemini 2.0

* Add/modify tests

* Fix grounding check

* Remove 2.0 grounding test; exclude experimental model in VERTEX_MODELS_TO_NOT_TEST

* Swap order of tools

* Fix formatting

* fix(get_api_base.py): return api base in streaming response

Fixes https://github.com/BerriAI/litellm/issues/7249

Closes https://github.com/BerriAI/litellm/pull/7250

* fix(cost_calculator.py): only set base model to model if not none

Fixes https://github.com/BerriAI/litellm/issues/7223

* fix(cost_calculator.py): enforce stricter order when picking model for cost calculation

* fix(cost_calculator.py): fix '_select_model_name_for_cost_calc' to return model name with region name prefix if provided

* fix(utils.py): fix 'get_model_info()' to handle edge case where model name starts with custom llm provider AND custom llm provider is given

* fix(cost_calculator.py): handle `custom_llm_provider-` scenario

* fix(cost_calculator.py): e2e working tts cost tracking

ensures the initial message is passed in to the cost calculator

* fix(factory.py): suppress linting errors

* fix(cost_calculator.py): strip llm provider from model name after selecting cost calc model

* fix(litellm_logging.py): store initial request in 'input' field + accept base_model to be passed in litellm_params directly

* test: handle none env var value in flaky test

* fix(litellm_logging.py): fix linting errors

---------

Co-authored-by: Sam B <samlingx@gmail.com>
2024-12-17 15:33:36 -08:00
audio_utils fix import error 2024-09-05 10:09:44 -07:00
llm_cost_calc LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
llm_response_utils LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
prompt_templates LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
tokenizers Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
asyncify.py build(config.yml): bump anyio version 2024-08-27 07:37:06 -07:00
core_helpers.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
default_encoding.py Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
duration_parser.py (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) 2024-11-23 16:59:46 -08:00
exception_mapping_utils.py Litellm dev 12 13 2024 p1 (#7219) 2024-12-13 19:01:28 -08:00
get_llm_provider_logic.py Litellm merge pr (#7161) 2024-12-10 22:49:26 -08:00
get_supported_openai_params.py fix(get_supported_openai_params.py): cleanup (#7176) 2024-12-11 01:15:53 -08:00
json_validation_rule.py feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls 2024-07-18 16:57:38 -07:00
litellm_logging.py LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
llm_request_utils.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
logging_utils.py (refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks (#6220) 2024-10-15 12:45:12 +05:30
mock_functions.py test(router_code_coverage.py): check if all router functions are dire… (#6186) 2024-10-14 22:44:00 -07:00
README.md (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) 2024-11-23 16:59:46 -08:00
realtime_streaming.py Litellm dev 10 22 2024 (#6384) 2024-10-22 21:18:54 -07:00
redact_messages.py (feat) Allow enabling logging message / response for specific virtual keys (#7071) 2024-12-06 21:25:36 -08:00
response_header_helpers.py fix(utils.py): guarantee openai-compatible headers always exist in response 2024-09-28 21:08:15 -07:00
rules.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
streaming_chunk_builder_utils.py LiteLLM Minor Fixes & Improvements (12/05/2024) (#7051) 2024-12-06 14:29:53 -08:00
streaming_handler.py LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
token_counter.py fix: Support WebP image format and avoid token calculation error (#7182) 2024-12-12 14:32:39 -08:00

Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

Core files:

  • streaming_handler.py: the core streaming logic + streaming-related helper utils.
  • core_helpers.py: code used in types/ - e.g. `map_finish_reason`.
  • exception_mapping_utils.py: utils for mapping exceptions to OpenAI-compatible error types.
  • default_encoding.py: code for loading the default encoding (tiktoken) - see the token-counting sketch below.
  • get_llm_provider_logic.py: code for inferring the LLM provider from a given model name - see the provider-inference sketch below.
  • duration_parser.py: code for parsing durations - e.g. "1d", "1mo", "10s" - see the duration-parsing sketch below.
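
As a concrete illustration of what default_encoding.py provides, here is a minimal token-counting sketch using tiktoken. The encoding name (cl100k_base) and the count_tokens helper are assumptions for illustration, not necessarily what the module actually exposes.

```python
# Hypothetical sketch: load a default tiktoken encoding and count tokens.
# The encoding choice below is an assumption, not litellm's actual default.
import tiktoken

DEFAULT_ENCODING = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Count tokens in a string using the default encoding."""
    return len(DEFAULT_ENCODING.encode(text))

print(count_tokens("hello world"))  # -> 2 with cl100k_base
```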
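
The provider-inference sketch below shows the general shape of the logic in get_llm_provider_logic.py, assuming prefix-based routing ("provider/model") with a fallback on well-known model-name prefixes. infer_llm_provider is a hypothetical helper; the real implementation covers far more providers and edge cases.

```python
# Hypothetical sketch of provider inference from a model name.
def infer_llm_provider(model: str) -> str:
    """Guess the LLM provider for names like 'anthropic/claude-3-opus'."""
    if "/" in model:
        # An explicit "provider/model" prefix takes priority.
        return model.split("/", 1)[0]
    # Fall back to a few well-known model-name prefixes (illustrative subset).
    if model.startswith(("gpt-", "o1")):
        return "openai"
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith("gemini"):
        return "gemini"
    raise ValueError(f"Could not infer provider for model: {model!r}")

print(infer_llm_provider("anthropic/claude-3-opus"))  # -> anthropic
print(infer_llm_provider("gpt-4o"))                   # -> openai
```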
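
Finally, the duration-parsing sketch below handles the "1d" / "1mo" / "10s" style strings mentioned above. parse_duration is a hypothetical helper, and approximating a month as 30 days is an assumption; the actual duration_parser.py may behave differently.

```python
# Hypothetical sketch: parse "<amount><unit>" duration strings into timedeltas.
import re
from datetime import timedelta

_DURATION_RE = re.compile(r"^(\d+)(mo|s|m|h|d)$")  # "mo" must precede "m"

# Rough conversions; a month ("mo") is approximated as 30 days here.
_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "mo": 30 * 86400}

def parse_duration(value: str) -> timedelta:
    """Parse strings like '10s', '1d', '1mo' into a timedelta."""
    match = _DURATION_RE.match(value.strip())
    if match is None:
        raise ValueError(f"Unrecognized duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return timedelta(seconds=amount * _UNIT_SECONDS[unit])

print(parse_duration("1d"))   # -> 1 day, 0:00:00
print(parse_duration("1mo"))  # -> 30 days, 0:00:00
print(parse_duration("10s"))  # -> 0:00:10
```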