Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-27 03:34:10 +00:00)
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini process url
* refactor(router.py): refactor '_prompt_management_factory' to use logging obj get_chat_completion logic - deduplicates code
* fix(litellm_logging.py): update 'get_chat_completion_prompt' to update logging object messages
* docs(prompt_management.md): update prompt management to be in beta given feedback - this still needs to be revised (e.g. passing in user message, not ignoring)
* refactor(prompt_management_base.py): introduce base class for prompt management - allows consistent behaviour across prompt management integrations
* feat(prompt_management_base.py): support adding client message to template message + refactor langfuse prompt management to use prompt management base
* fix(litellm_logging.py): log prompt id + prompt variables to langfuse if set - allows tracking what prompt was used for what purpose
* feat(litellm_logging.py): log prompt management metadata in standard logging payload + use in langfuse - allows logging prompt id / prompt variables to langfuse
* test: fix test
* fix(router.py): cleanup unused imports
* fix: fix linting error
* fix: fix trace param typing
* fix: fix linting errors
* fix: fix code qa check
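The "base class for prompt management" change described above can be illustrated with a minimal sketch. The class and method names below are illustrative only, not litellm's actual API: the base class defines the shared merge behaviour (template messages first, then the client's messages appended rather than ignored), while each integration implements template lookup.

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


class PromptManagementBase(ABC):
    """Illustrative base class: each prompt-management integration
    (e.g. a Langfuse-backed one) implements template lookup; the
    shared logic merges the client's messages into the template."""

    @abstractmethod
    def get_template(
        self, prompt_id: str, prompt_variables: Optional[dict]
    ) -> List[Message]:
        """Fetch and render the stored prompt template."""
        ...

    def get_chat_completion_prompt(
        self,
        prompt_id: str,
        prompt_variables: Optional[dict],
        client_messages: List[Message],
    ) -> List[Message]:
        # Consistent behaviour across integrations: template messages
        # first, then the caller's messages appended (not dropped).
        template = self.get_template(prompt_id, prompt_variables)
        return template + client_messages


class InMemoryPromptManager(PromptManagementBase):
    """Toy integration backed by a dict, for demonstration only."""

    def __init__(self, templates: Dict[str, List[Message]]):
        self.templates = templates

    def get_template(self, prompt_id, prompt_variables=None):
        rendered = []
        for msg in self.templates[prompt_id]:
            content = msg["content"]
            if prompt_variables:
                content = content.format(**prompt_variables)
            rendered.append({"role": msg["role"], "content": content})
        return rendered
```

Under this pattern, the router only ever calls `get_chat_completion_prompt`, so the merge semantics stay identical no matter which integration supplies the template.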
Folders:
- audio_utils
- llm_cost_calc
- llm_response_utils
- prompt_templates
- specialty_caches
- tokenizers

Files:
- asyncify.py
- core_helpers.py
- default_encoding.py
- duration_parser.py
- exception_mapping_utils.py
- get_llm_provider_logic.py
- get_supported_openai_params.py
- health_check_utils.py
- initialize_dynamic_callback_params.py
- json_validation_rule.py
- litellm_logging.py
- llm_request_utils.py
- logging_utils.py
- mock_functions.py
- README.md
- realtime_streaming.py
- redact_messages.py
- response_header_helpers.py
- rules.py
- streaming_chunk_builder_utils.py
- streaming_handler.py
- token_counter.py
Folder Contents
This folder contains general-purpose utilities that are used in multiple places in the codebase.
Core files:
- streaming_handler.py: the core streaming logic + streaming-related helper utils
- core_helpers.py: code used in types/ - e.g. map_finish_reason.
- exception_mapping_utils.py: utils for mapping exceptions to OpenAI-compatible error types.
- default_encoding.py: code for loading the default encoding (tiktoken).
- get_llm_provider_logic.py: code for inferring the LLM provider from a given model name.
- duration_parser.py: code for parsing durations - e.g. "1d", "1mo", "10s".
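The duration-parsing job described for duration_parser.py can be sketched as follows. This is a minimal re-implementation for illustration, not litellm's actual code; the supported unit set, the 30-day approximation for "mo", and the `timedelta` return type are all assumptions.

```python
import re
from datetime import timedelta

# Assumed unit map; months ("mo") are approximated as 30 days here.
_UNITS = {
    "s": lambda n: timedelta(seconds=n),
    "m": lambda n: timedelta(minutes=n),
    "h": lambda n: timedelta(hours=n),
    "d": lambda n: timedelta(days=n),
    "mo": lambda n: timedelta(days=30 * n),
}


def parse_duration(value: str) -> timedelta:
    """Parse strings like '10s', '1d', '1mo' into a timedelta."""
    # "mo" must be tried before the single-letter units so that
    # "1mo" is not read as "1m" plus a stray "o".
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value.strip())
    if not match:
        raise ValueError(f"unrecognized duration: {value!r}")
    n, unit = int(match.group(1)), match.group(2)
    return _UNITS[unit](n)
```

A caller would use it as `parse_duration("1d")`, getting back a value that composes with ordinary datetime arithmetic.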