
Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

Core files:

  • streaming_handler.py: the core streaming logic + streaming-related helper utils (usage sketch below)
  • core_helpers.py: code used in types/ - e.g. map_finish_reason (sketch below)
  • exception_mapping_utils.py: utils for mapping provider exceptions to OpenAI-compatible error types (sketch below)
  • default_encoding.py: code for loading the default encoding (tiktoken)
  • get_llm_provider_logic.py: code for inferring the LLM provider from a given model name (sketch below)
  • duration_parser.py: code for parsing durations - e.g. "1d", "1mo", "10s" (sketch below)
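
The sketches below illustrate what each of these utilities does; they are minimal sketches, not the actual implementations. First, streaming: when a caller passes stream=True, the handler in streaming_handler.py is the iterator doing the work under the hood, yielding OpenAI-format delta chunks. A hedged usage sketch (the model name is just an example):

```python
import litellm

# Hedged usage sketch: with stream=True, litellm returns an iterator
# (driven by streaming_handler.py) that yields OpenAI-style delta chunks.
response = litellm.completion(
    model="gpt-4o-mini",  # example model; any supported model works
    messages=[{"role": "user", "content": "Say hi"}],
    stream=True,
)
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.content:  # content can be None on role/finish chunks
        print(delta.content, end="")
```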
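
core_helpers.map_finish_reason normalizes provider-specific finish reasons to OpenAI's values. A minimal self-contained sketch; the mapping table here is illustrative, not the exhaustive one LiteLLM ships:

```python
# Illustrative mapping only; the real table in core_helpers.py covers more providers.
FINISH_REASON_MAP = {
    "stop_sequence": "stop",   # Anthropic-style
    "end_turn": "stop",        # Anthropic-style
    "max_tokens": "length",    # token-limit cutoff
    "tool_use": "tool_calls",  # tool invocation
}

def map_finish_reason(finish_reason: str) -> str:
    """Normalize a provider-specific finish reason to an OpenAI-compatible one."""
    return FINISH_REASON_MAP.get(finish_reason, finish_reason)
```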
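
exception_mapping_utils.py translates raw provider errors into OpenAI-compatible exception types, so callers handle a single error hierarchy regardless of provider. A self-contained sketch: the class names mirror the OpenAI error taxonomy, while LiteLLM's real exceptions are richer and carry provider/model context:

```python
# Self-contained sketch; litellm raises its own richer exception classes.
class AuthenticationError(Exception): ...
class RateLimitError(Exception): ...
class BadRequestError(Exception): ...
class APIError(Exception): ...

STATUS_TO_ERROR: dict[int, type[Exception]] = {
    400: BadRequestError,
    401: AuthenticationError,
    429: RateLimitError,
}

def map_provider_error(status_code: int, message: str) -> Exception:
    """Return an OpenAI-style exception for a raw provider HTTP error."""
    return STATUS_TO_ERROR.get(status_code, APIError)(message)
```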
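
get_llm_provider_logic.py infers the provider from a model string - e.g. an explicit "anthropic/..." prefix, or name-based heuristics. A toy sketch of the idea (the heuristics shown are illustrative, not the real logic):

```python
def infer_provider(model: str) -> tuple[str, str]:
    """Toy inference: 'anthropic/claude-3-opus' -> ('claude-3-opus', 'anthropic')."""
    if "/" in model:  # an explicit "provider/model" prefix wins
        provider, _, name = model.partition("/")
        return name, provider
    if model.startswith("gpt-"):  # name-based heuristic, illustrative only
        return model, "openai"
    raise ValueError(f"Could not infer provider for model={model!r}")
```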
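
duration_parser.py powers windows like "1d" or "2mo" (e.g. for provider budgets). A minimal sketch, approximating a month as 30 days (an assumption made for the sketch):

```python
import re
from datetime import timedelta

DURATION_RE = re.compile(r"^(\d+)(mo|[smhd])$")  # e.g. "10s", "1d", "2mo"
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "mo": 30 * 86400}

def parse_duration(value: str) -> timedelta:
    """Parse strings like '10s', '1d', '2mo' into a timedelta."""
    match = DURATION_RE.match(value)
    if match is None:
        raise ValueError(f"Unsupported duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return timedelta(seconds=amount * UNIT_SECONDS[unit])

assert parse_duration("1d") == timedelta(days=1)
```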