litellm-mirror/litellm/litellm_core_utils
Krrish Dholakia fcf4ea3608 build: merge squashed commit
Squashed commit of the following:

commit 6678e15381
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 09:29:15 2025 -0800

    test_prompt_caching

commit bd86e0ac47
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:57:16 2025 -0800

    test_prompt_caching

commit 2fc21ad51e
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:13:45 2025 -0800

    test_aprompt_caching

commit d94cff55ff
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 08:13:12 2025 -0800

    test_prompt_caching

commit 49c5e7811e
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 07:43:53 2025 -0800

    ui new build

commit cb8d5e5917
Author: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Date:   Wed Feb 26 07:38:56 2025 -0800

    (UI) - Create Key flow for existing users (#8844)

    * working create user button

    * working create user for a key flow

    * allow searching users

    * working create user + key

    * use clear sections on create key

    * better search for users

    * fix create key

    * ui fix create key button - make it neater / cleaner

    * ui fix all keys table

commit 335ba30467
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Wed Feb 26 08:53:17 2025 -0800

    fix: fix file name

commit b8c5b31a4e
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Tue Feb 25 22:54:46 2025 -0800

    fix: fix utils

commit ac6e503461
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Feb 24 10:43:31 2025 -0800

    fix(main.py): fix openai message for assistant msg if role is missing - openai allows this

    Fixes https://github.com/BerriAI/litellm/issues/8661

commit de3989dbc5
Author: Krrish Dholakia <krrishdholakia@gmail.com>
Date:   Mon Feb 24 21:19:25 2025 -0800

    fix(get_litellm_params.py): handle no-log being passed in via kwargs

    Fixes https://github.com/BerriAI/litellm/issues/8380
2025-02-26 09:39:27 -08:00
audio_utils (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
llm_cost_calc LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) 2024-12-23 22:02:52 -08:00
llm_response_utils fix(router.py): add more deployment timeout debug information for tim… (#8523) 2025-02-13 17:10:22 -08:00
prompt_templates Add anthropic thinking + reasoning content support (#8778) 2025-02-24 21:54:30 -08:00
specialty_caches Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
tokenizers Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
asyncify.py (core sdk fix) - fix fallbacks stuck in infinite loop (#7751) 2025-01-13 19:34:34 -08:00
core_helpers.py fix unused imports 2025-01-02 22:28:22 -08:00
dd_tracing.py (Bug fix) dd-trace used by default on litellm proxy (#8817) 2025-02-25 19:54:22 -08:00
default_encoding.py Code Quality Improvement - remove tokenizers/ from /llms (#7163) 2024-12-10 23:50:15 -08:00
dot_notation_indexing.py feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… (#8037) 2025-01-28 16:27:06 -08:00
duration_parser.py (Bug Fix + Better Observability) - BudgetResetJob: (#8562) 2025-02-15 16:13:08 -08:00
exception_mapping_utils.py Litellm dev 02 13 2025 p2 (#8525) 2025-02-13 20:28:42 -08:00
fallback_utils.py LiteLLM Minor Fixes & Improvements (2024/16/01) (#7826) 2025-01-17 20:59:21 -08:00
get_litellm_params.py build: merge squashed commit 2025-02-26 09:39:27 -08:00
get_llm_provider_logic.py Fix deepseek calling - refactor to use base_llm_http_handler (#8266) 2025-02-04 22:30:00 -08:00
get_model_cost_map.py Doc updates + management endpoint fixes (#8138) 2025-01-30 22:56:41 -08:00
get_supported_openai_params.py fix(utils.py): fix vertex ai optional param handling (#8477) 2025-02-13 19:58:50 -08:00
health_check_utils.py (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) 2024-12-28 18:38:54 -08:00
initialize_dynamic_callback_params.py Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) 2024-12-31 23:18:41 -08:00
json_validation_rule.py feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls 2024-07-18 16:57:38 -07:00
litellm_logging.py build: merge squashed commit 2025-02-26 09:39:27 -08:00
llm_request_utils.py Revert "test_completion_mistral_api_mistral_large_function_call" 2025-01-17 07:20:46 -08:00
logging_callback_manager.py (Feat) - Allow viewing Request/Response Logs stored in GCS Bucket (#8449) 2025-02-10 20:38:55 -08:00
logging_utils.py Add datadog health check support + fix bedrock converse cost tracking w/ region name specified (#7958) 2025-01-23 22:17:09 -08:00
mock_functions.py Ensure base_model cost tracking works across all endpoints (#7989) 2025-01-24 21:05:26 -08:00
README.md (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) 2024-11-23 16:59:46 -08:00
realtime_streaming.py (code quality) run ruff rule to ban unused imports (#7313) 2024-12-19 12:33:42 -08:00
redact_messages.py Litellm staging (#8270) 2025-02-04 22:35:48 -08:00
response_header_helpers.py fix(utils.py): guarantee openai-compatible headers always exist in response 2024-09-28 21:08:15 -07:00
rules.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
safe_json_dumps.py (Bug fix) - Cache Health not working when configured with prometheus service logger (#8687) 2025-02-20 13:41:56 -08:00
sensitive_data_masker.py Litellm dev 02 07 2025 p2 (#8377) 2025-02-07 17:30:38 -08:00
streaming_chunk_builder_utils.py LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643) 2025-01-08 19:45:19 -08:00
streaming_handler.py Anthropic Citations API Support (#8382) 2025-02-07 22:27:01 -08:00
thread_pool_executor.py (Fixes) OpenAI Streaming Token Counting + Fixes usage track when litellm.turn_off_message_logging=True (#8156) 2025-01-31 15:06:37 -08:00
token_counter.py fix: Support WebP image format and avoid token calculation error (#7182) 2024-12-12 14:32:39 -08:00

Folder Contents

This folder contains general-purpose utilities that are used in multiple places in the codebase.

Core files:

  • streaming_handler.py: core streaming logic plus streaming-related helper utilities.
  • core_helpers.py: helpers used in types/, e.g. map_finish_reason.
  • exception_mapping_utils.py: utilities for mapping provider exceptions to OpenAI-compatible error types.
  • default_encoding.py: loads the default encoding (tiktoken).
  • get_llm_provider_logic.py: infers the LLM provider from a given model name.
  • duration_parser.py: parses duration strings, e.g. "1d", "1mo", "10s".
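To illustrate the kind of duration strings duration_parser.py handles, here is a minimal sketch of such a parser. This is an illustrative assumption, not litellm's actual implementation: the function name `parse_duration`, the supported unit set, and the 30-day approximation for "mo" are all hypothetical.

```python
import re
from datetime import timedelta

# Hypothetical unit table; the real parser may support different units.
_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),  # approximate a month as 30 days
}

def parse_duration(value: str) -> timedelta:
    """Parse a duration string such as "10s", "1d", or "1mo" into a timedelta."""
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value.strip())
    if match is None:
        raise ValueError(f"unrecognized duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return amount * _UNITS[unit]
```

Note that "mo" must be tried before the single-letter "m" in the regex alternation, otherwise "1mo" would partially match as one minute and fail the full match.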