- audio_utils/
- llm_cost_calc/
- llm_response_utils/
- prompt_templates/
- specialty_caches/
- tokenizers/
- asyncify.py
- core_helpers.py
- dd_tracing.py
- default_encoding.py
- dot_notation_indexing.py
- duration_parser.py
- exception_mapping_utils.py
- fallback_utils.py
- get_litellm_params.py
- get_llm_provider_logic.py
- get_model_cost_map.py
- get_supported_openai_params.py
- health_check_utils.py
- initialize_dynamic_callback_params.py
- json_validation_rule.py
- litellm_logging.py
- llm_request_utils.py
- logging_callback_manager.py
- logging_utils.py
- mock_functions.py
- model_param_helper.py
- README.md
- realtime_streaming.py
- redact_messages.py
- response_header_helpers.py
- rules.py
- safe_json_dumps.py
- sensitive_data_masker.py
- streaming_chunk_builder_utils.py
- streaming_handler.py
- thread_pool_executor.py
- token_counter.py
## Folder Contents
This folder contains general-purpose utilities that are used in multiple places in the codebase.
Core files:

- `streaming_handler.py`: The core streaming logic, plus streaming-related helper utils.
- `core_helpers.py`: code used in `types/` - e.g. `map_finish_reason` (see the finish-reason sketch below).
- `exception_mapping_utils.py`: utils for mapping exceptions to OpenAI-compatible error types (sketched below).
- `default_encoding.py`: code for loading the default encoding (tiktoken).
- `get_llm_provider_logic.py`: code for inferring the LLM provider from a given model name (sketched below).
- `duration_parser.py`: code for parsing durations - e.g. "1d", "1mo", "10s" (sketched below).
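To illustrate what a finish-reason normalizer does, here is a minimal sketch. The function name `map_finish_reason` comes from the README above, but the mapping table and fallback behavior are assumptions, not the actual `core_helpers.py` implementation:

```python
# Minimal sketch (not the real litellm code): normalize provider-specific
# finish reasons to OpenAI-style values.
PROVIDER_FINISH_REASON_MAP = {
    "end_turn": "stop",        # e.g. Anthropic
    "stop_sequence": "stop",   # e.g. Anthropic
    "max_tokens": "length",
    "MAX_TOKENS": "length",    # e.g. Vertex AI / Gemini
    "STOP": "stop",            # e.g. Vertex AI / Gemini
}


def map_finish_reason(finish_reason: str) -> str:
    """Return an OpenAI-compatible finish reason, falling back to the input."""
    return PROVIDER_FINISH_REASON_MAP.get(finish_reason, finish_reason)
```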
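Exception mapping follows the same idea: translate a provider error into an OpenAI-compatible error type. The sketch below is illustrative only; the exception class names and the status-code rules are assumptions, and the real `exception_mapping_utils.py` handles far more providers and cases:

```python
# Illustrative sketch: pick an OpenAI-style error class from a provider's
# HTTP status code and error message. Classes here are stand-ins.
class AuthenticationError(Exception):
    pass


class RateLimitError(Exception):
    pass


class ContextWindowExceededError(Exception):
    pass


class APIError(Exception):
    pass


def exception_for_status(status_code: int, message: str) -> Exception:
    """Map a provider HTTP error to an OpenAI-compatible exception."""
    if status_code == 401:
        return AuthenticationError(message)
    if status_code == 429:
        return RateLimitError(message)
    if status_code == 400 and "context length" in message.lower():
        return ContextWindowExceededError(message)
    return APIError(message)
```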
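Provider inference typically works from the model string itself. This is a rough sketch of the idea under assumed heuristics (an explicit `provider/model` prefix wins, otherwise fall back on the model-name prefix); it is not the actual `get_llm_provider_logic.py` code, which returns more information and covers many more providers:

```python
# Assumed heuristics, for illustration only.
KNOWN_PROVIDERS = {"openai", "azure", "anthropic", "vertex_ai", "bedrock"}


def infer_llm_provider(model: str) -> tuple[str, str]:
    """Return (model, provider) inferred from a model string like 'anthropic/claude-3-opus'."""
    if "/" in model:
        provider, _, stripped = model.partition("/")
        if provider in KNOWN_PROVIDERS:
            return stripped, provider
    if model.startswith("gpt-"):
        return model, "openai"
    if model.startswith("claude-"):
        return model, "anthropic"
    raise ValueError(f"Could not infer provider for model={model!r}")
```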
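Finally, a hedged sketch of duration parsing for strings like `"10s"`, `"1d"`, `"1mo"`. The supported units and the 30-day approximation of a month are assumptions, not the exact behavior of `duration_parser.py`:

```python
import re
from datetime import timedelta

# Assumed unit table; "mo" approximated as 30 days for illustration.
_UNITS = {
    "s": timedelta(seconds=1),
    "m": timedelta(minutes=1),
    "h": timedelta(hours=1),
    "d": timedelta(days=1),
    "mo": timedelta(days=30),
}


def parse_duration(value: str) -> timedelta:
    """Parse "<number><unit>" (e.g. "1d", "1mo", "10s") into a timedelta."""
    match = re.fullmatch(r"(\d+)\s*(mo|[smhd])", value.strip())
    if not match:
        raise ValueError(f"Unsupported duration format: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return amount * _UNITS[unit]
```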