litellm-mirror/litellm/litellm_core_utils
Krish Dholakia 35919d9fec
Litellm dev 01 13 2025 p2 (#7758)
* fix(factory.py): fix bedrock document url check

Makes the check more generic: if the content type starts with 'text' or 'application', assume it's a document and let it through.

Fixes https://github.com/BerriAI/litellm/issues/7746
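The generic check described above could be sketched roughly like this (the function name and docstring are illustrative assumptions, not litellm's actual implementation in factory.py):

```python
def is_document_content_type(content_type: str) -> bool:
    """Return True if a Bedrock attachment should be treated as a document.

    Illustrative sketch of the generic check: any content type that starts
    with 'text' or 'application' is assumed to be a document and allowed
    through, instead of enumerating every supported document format.
    """
    return content_type.startswith(("text", "application"))
```

This lets content types such as application/pdf or text/csv pass through without maintaining an explicit allow-list.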

* feat(key_management_endpoints.py): support writing the new key alias to AWS Secrets Manager on key rotation

Adds a rotation endpoint to the AWS key management hook, allowing rotated litellm virtual keys (with their new key alias) to be written to the secret manager.

* feat(key_management_event_hooks.py): support rotating keys and updating secret manager

* refactor(base_secret_manager.py): support rotate secret at the base level

Since rotation is just a composition of the existing read/write/delete operations, it's easy to implement once at the base manager level.
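A minimal sketch of what rotating at the base level can look like: the base class composes rotation from the subclass primitives, so each backend only implements read/write/delete. Class and method names here are assumptions for illustration, not litellm's actual base_secret_manager.py API.

```python
from abc import ABC, abstractmethod


class BaseSecretManager(ABC):
    """Sketch of a secret-manager abstraction with rotation in the base class."""

    @abstractmethod
    def read_secret(self, name: str) -> str: ...

    @abstractmethod
    def write_secret(self, name: str, value: str) -> None: ...

    @abstractmethod
    def delete_secret(self, name: str) -> None: ...

    def rotate_secret(self, old_name: str, new_name: str, new_value: str) -> None:
        # Write the rotated key under its new alias, then remove the old entry.
        # Works for any backend that implements the three primitives above.
        self.write_secret(new_name, new_value)
        self.delete_secret(old_name)


class InMemorySecretManager(BaseSecretManager):
    """Toy in-memory backend, just to demonstrate the base-level rotation."""

    def __init__(self) -> None:
        self.store: dict[str, str] = {}

    def read_secret(self, name: str) -> str:
        return self.store[name]

    def write_secret(self, name: str, value: str) -> None:
        self.store[name] = value

    def delete_secret(self, name: str) -> None:
        del self.store[name]
```

With this shape, an AWS-backed manager would only need to supply the three primitives; rotation comes for free from the base class.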

* style: cleanup unused imports
2025-01-14 17:04:01 -08:00
audio_utils (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
llm_cost_calc LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
llm_response_utils LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) 2024-12-17 15:33:36 -08:00
prompt_templates Litellm dev 01 13 2025 p2 (#7758) 2025-01-14 17:04:01 -08:00
specialty_caches Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
tokenizers Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
asyncify.py (core sdk fix) - fix fallbacks stuck in infinite loop (#7751) 2025-01-13 19:34:34 -08:00
core_helpers.py fix unused imports 2025-01-02 22:28:22 -08:00
default_encoding.py Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
duration_parser.py (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) 2024-11-23 16:59:46 -08:00
exception_mapping_utils.py Litellm dev 12 30 2024 p2 (#7495) 2025-01-01 18:57:29 -08:00
fallback_utils.py (core sdk fix) - fix fallbacks stuck in infinite loop (#7751) 2025-01-13 19:34:34 -08:00
get_llm_provider_logic.py [BETA] Add OpenAI /images/variations + Topaz API support (#7700) 2025-01-11 23:27:46 -08:00
get_supported_openai_params.py LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 (#7670) 2025-01-10 17:49:05 -08:00
health_check_utils.py (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
initialize_dynamic_callback_params.py Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
json_validation_rule.py feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls 2024-07-18 16:57:38 -07:00
litellm_logging.py (litellm SDK perf improvements) - handle cases when unable to lookup model in model cost map (#7750) 2025-01-13 19:58:46 -08:00
llm_request_utils.py Litellm dev 01 11 2025 p3 (#7702) 2025-01-11 20:06:54 -08:00
logging_utils.py Complete 'requests' library removal (#7350) 2024-12-22 07:21:25 -08:00
mock_functions.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
README.md (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) 2024-11-23 16:59:46 -08:00
realtime_streaming.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
redact_messages.py Litellm dev 01 02 2025 p1 (#7516) 2025-01-03 14:40:57 -08:00
response_header_helpers.py fix(utils.py): guarantee openai-compatible headers always exist in response 2024-09-28 21:08:15 -07:00
rules.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
streaming_chunk_builder_utils.py LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643) 2025-01-08 19:45:19 -08:00
streaming_handler.py fix(main.py): fix lm_studio/ embedding routing (#7658) 2025-01-09 23:03:24 -08:00
token_counter.py fix: Support WebP image format and avoid token calculation error (#7182) 2024-12-12 14:32:39 -08:00

Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

Core files:

  • streaming_handler.py: The core streaming logic, plus streaming-related helper utils.
  • core_helpers.py: Code used in types/ - e.g. map_finish_reason.
  • exception_mapping_utils.py: Utils for mapping provider exceptions to OpenAI-compatible error types.
  • default_encoding.py: Code for loading the default encoding (tiktoken).
  • get_llm_provider_logic.py: Code for inferring the LLM provider from a given model name.
  • duration_parser.py: Code for parsing duration strings - e.g. "1d", "1mo", "10s".
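As an illustration of the duration formats mentioned above, a minimal parser might look like the sketch below. The unit table and the 30-day approximation of a month are assumptions for illustration; see duration_parser.py for litellm's actual behavior.

```python
import re
from datetime import timedelta

# Supported unit suffixes; "mo" must be matched before "m" (minutes).
_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),  # approximate a month as 30 days
}


def parse_duration(value: str) -> timedelta:
    """Parse strings like "10s", "1d", or "2mo" into a timedelta."""
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value)
    if match is None:
        raise ValueError(f"unsupported duration: {value!r}")
    count, unit = match.groups()
    return int(count) * _UNITS[unit]
```

For example, parse_duration("1d") yields a one-day timedelta, which a budget router can then use to compute reset windows.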