Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26 03:04:13 +00:00
Latest commit:

* fix(litellm_proxy/chat/transformation.py): support 'thinking' param. Fixes https://github.com/BerriAI/litellm/issues/9380
* feat(azure/gpt_transformation.py): add Azure audio model support. Closes https://github.com/BerriAI/litellm/issues/6305
* fix(utils.py): use provider_config in common functions
* fix(utils.py): add missing provider configs to get_chat_provider_config
* test: fix test
* fix: fix path
* feat(utils.py): make bedrock invoke nova config BaseConfig compatible
* fix: fix linting errors
* fix(azure_ai/transformation.py): remove buggy optional-param filtering for Azure AI. Removes an incorrect supported-tool-choice check when calling Azure AI, which prevented calling models with response_format unless they were on the litellm model cost map
* fix(amazon_cohere_transformation.py): fix bedrock invoke cohere transformation to inherit from CohereChatConfig
* test: fix azure ai tool choice mapping
* fix: fix model cost map to add 'supports_tool_choice' to cohere models
* fix(get_supported_openai_params.py): check if custom llm provider is in llm providers
* fix(get_supported_openai_params.py): fix llm provider in-list check
* fix: fix ruff check errors
* fix: support defs when calling bedrock nova
* fix(factory.py): fix test
Directories:

- audio_utils/
- llm_cost_calc/
- llm_response_utils/
- prompt_templates/
- specialty_caches/
- tokenizers/

Files:

- asyncify.py
- core_helpers.py
- credential_accessor.py
- dd_tracing.py
- default_encoding.py
- dot_notation_indexing.py
- duration_parser.py
- exception_mapping_utils.py
- fallback_utils.py
- get_litellm_params.py
- get_llm_provider_logic.py
- get_model_cost_map.py
- get_supported_openai_params.py
- health_check_utils.py
- initialize_dynamic_callback_params.py
- json_validation_rule.py
- litellm_logging.py
- llm_request_utils.py
- logging_callback_manager.py
- logging_utils.py
- mock_functions.py
- model_param_helper.py
- README.md
- realtime_streaming.py
- redact_messages.py
- response_header_helpers.py
- rules.py
- safe_json_dumps.py
- sensitive_data_masker.py
- streaming_chunk_builder_utils.py
- streaming_handler.py
- thread_pool_executor.py
- token_counter.py
Folder Contents
This folder contains general-purpose utilities that are used in multiple places in the codebase.
Core files:

- `streaming_handler.py`: the core streaming logic + streaming-related helper utils
- `core_helpers.py`: code used in `types/` - e.g. `map_finish_reason`.
- `exception_mapping_utils.py`: utils for mapping exceptions to openai-compatible error types.
- `default_encoding.py`: code for loading the default encoding (tiktoken)
- `get_llm_provider_logic.py`: code for inferring the LLM provider from a given model name.
- `duration_parser.py`: code for parsing durations - e.g. "1d", "1mo", "10s"
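As a rough illustration of what a duration parser like `duration_parser.py` handles, the sketch below turns strings such as `"10s"`, `"1d"`, or `"1mo"` into `timedelta` objects. The unit table and the 30-day month approximation are assumptions for illustration, not litellm's actual implementation.

```python
import re
from datetime import timedelta

# Hypothetical unit table; "mo" is approximated as 30 days since
# timedelta has no native month unit.
_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),
}

def parse_duration(value: str) -> timedelta:
    # Match an integer count followed by a known unit, e.g. "10s", "1mo".
    match = re.fullmatch(r"(\d+)(mo|[smhd])", value.strip())
    if match is None:
        raise ValueError(f"unrecognized duration: {value!r}")
    count, unit = match.groups()
    return int(count) * _UNITS[unit]
```

Note that `"mo"` must be tried before the single-letter `"m"` in the regex alternation, otherwise `"1mo"` would fail to match as a whole string.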
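Similarly, provider inference as described for `get_llm_provider_logic.py` can be sketched as: honor an explicit `provider/model` prefix first, then fall back to known model-name patterns. The provider set and prefix hints below are illustrative placeholders, not litellm's real mapping.

```python
# Hypothetical provider list and model-name hints for illustration only.
KNOWN_PROVIDERS = {"openai", "azure", "anthropic", "bedrock", "vertex_ai"}

MODEL_NAME_HINTS = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "vertex_ai",
}

def infer_llm_provider(model: str) -> tuple[str, str]:
    """Return (model, provider) for a string like 'anthropic/claude-3'."""
    # An explicit "provider/" prefix wins over name-based guessing.
    if "/" in model:
        provider, _, rest = model.partition("/")
        if provider in KNOWN_PROVIDERS:
            return rest, provider
    # Otherwise, guess from well-known model-name prefixes.
    for prefix, provider in MODEL_NAME_HINTS.items():
        if model.startswith(prefix):
            return model, provider
    raise ValueError(f"could not infer provider for {model!r}")
```

The explicit-prefix check first keeps the fallback heuristics from overriding a caller who already named the provider.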