## Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

### Core files:
- `streaming_handler.py`: the core streaming logic + streaming-related helper utils
- `core_helpers.py`: code used in `types/` - e.g. `map_finish_reason` (sketched below)
- `exception_mapping_utils.py`: utils for mapping exceptions to OpenAI-compatible error types (sketched below)
- `default_encoding.py`: code for loading the default encoding (tiktoken) (sketched below)
- `get_llm_provider_logic.py`: code for inferring the LLM provider from a given model name (sketched below)
- `duration_parser.py`: code for parsing durations - e.g. "1d", "1mo", "10s" (sketched below)
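
For context, `map_finish_reason` normalizes provider-specific finish reasons onto the OpenAI set. A minimal sketch of the idea - the mapping table below is illustrative, not litellm's actual table:

```python
# Illustrative sketch of finish-reason normalization - the real table
# in core_helpers.py covers many more providers and values.
OPENAI_FINISH_REASONS = {"stop", "length", "tool_calls", "content_filter", "function_call"}

# Assumed provider-specific values, for illustration only.
FINISH_REASON_MAP = {
    "end_turn": "stop",       # Anthropic
    "max_tokens": "length",   # Anthropic
    "STOP": "stop",           # Vertex AI
    "MAX_TOKENS": "length",   # Vertex AI
    "SAFETY": "content_filter",
}

def map_finish_reason(finish_reason: str) -> str:
    """Map a provider finish reason onto the OpenAI-compatible set."""
    if finish_reason in OPENAI_FINISH_REASONS:
        return finish_reason
    return FINISH_REASON_MAP.get(finish_reason, finish_reason)
```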
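Exception mapping follows the same shape: translate whatever a provider raises into an OpenAI-compatible error type. A hedged sketch keyed on HTTP status codes (the real mapping is per-provider and far more detailed):

```python
# Illustrative only: map an HTTP status code from a provider response
# onto an OpenAI-compatible error type name. The real logic in
# exception_mapping_utils.py also inspects the provider and error body.
STATUS_TO_OPENAI_ERROR = {
    400: "BadRequestError",
    401: "AuthenticationError",
    403: "PermissionDeniedError",
    404: "NotFoundError",
    429: "RateLimitError",
    500: "InternalServerError",
}

def map_exception(status_code: int) -> str:
    """Return the OpenAI-compatible error type for a raw status code."""
    return STATUS_TO_OPENAI_ERROR.get(status_code, "APIError")
```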
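Loading a tiktoken encoding is a one-liner. A sketch assuming `cl100k_base` as the default - check `default_encoding.py` for the encoding actually used:

```python
import tiktoken

# Assumption: cl100k_base as the default encoding;
# default_encoding.py defines the real default for the codebase.
encoding = tiktoken.get_encoding("cl100k_base")
print(len(encoding.encode("hello world")))  # -> number of tokens
```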
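Provider inference works off the model string itself. A rough sketch of the idea, assuming explicit `provider/model` prefixes plus name-based heuristics - the real function supports many more providers and returns extra values such as the API base:

```python
# Rough sketch only - get_llm_provider_logic.py handles many more
# providers, overrides, and edge cases than shown here.
KNOWN_PROVIDERS = {"openai", "anthropic", "bedrock", "vertex_ai", "gemini"}

def infer_provider(model: str) -> tuple[str, str]:
    """Return (model, provider), e.g. 'anthropic/claude-3-opus' -> ('claude-3-opus', 'anthropic')."""
    # Explicit "provider/model" prefixes win.
    if "/" in model:
        provider, _, stripped = model.partition("/")
        if provider in KNOWN_PROVIDERS:
            return stripped, provider
    # Otherwise fall back to model-name heuristics (illustrative).
    if model.startswith(("gpt-", "o1")):
        return model, "openai"
    if model.startswith("claude-"):
        return model, "anthropic"
    raise ValueError(f"Could not infer LLM provider for model: {model!r}")
```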
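Duration strings pair an integer with a unit suffix, so parsing reduces to a regex plus a unit table. A self-contained sketch - the supported suffixes and the 30-day month approximation are assumptions; see `duration_parser.py` for the real rules:

```python
import re
from datetime import timedelta

# Assumed unit table - "mo" (months) is approximated as 30 days here;
# the real parser may treat months differently.
_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),
}

def parse_duration(value: str) -> timedelta:
    """Parse strings like '10s', '1d', '1mo' into a timedelta."""
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value.strip())
    if not match:
        raise ValueError(f"Unparseable duration: {value!r}")
    count, unit = int(match.group(1)), match.group(2)
    return count * _UNITS[unit]

assert parse_duration("10s") == timedelta(seconds=10)
assert parse_duration("1d") == timedelta(days=1)
```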