audio_utils
(Refactor) - Re use litellm.completion/litellm.embedding etc for health checks ( #7455 )
2024-12-28 18:38:54 -08:00
llm_cost_calc
Add bedrock latency optimized inference support ( #9623 )
2025-03-29 00:23:09 -07:00
llm_response_utils
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
prompt_templates
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
specialty_caches
Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint ( #7493 )
2024-12-31 23:18:41 -08:00
tokenizers
Code Quality Improvement - remove tokenizers/ from /llms ( #7163 )
2024-12-10 23:50:15 -08:00
asyncify.py
(core sdk fix) - fix fallbacks stuck in infinite loop ( #7751 )
2025-01-13 19:34:34 -08:00
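The file name asyncify.py suggests a helper that wraps a blocking function so it can be awaited from async code without stalling the event loop. A minimal hypothetical sketch of that common pattern (illustrative only, not litellm's actual implementation):

```python
import asyncio
import functools
from typing import Any, Callable, Coroutine

# Hypothetical "asyncify" decorator: run a blocking callable in the default
# thread pool so the event loop stays responsive. Illustrative sketch only.
def asyncify(func: Callable[..., Any]) -> Callable[..., Coroutine[Any, Any, Any]]:
    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Delegate the blocking call to a worker thread and await the result.
        return await asyncio.to_thread(func, *args, **kwargs)
    return wrapper

@asyncify
def slow_add(a: int, b: int) -> int:
    return a + b
```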
core_helpers.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
credential_accessor.py
feat(credential_accessor.py): fix upserting new credentials via accessor
2025-03-12 19:03:37 -07:00
dd_tracing.py
(Bug fix) - don't log messages in model_parameters in StandardLoggingPayload ( #8932 )
2025-03-01 13:39:45 -08:00
default_encoding.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
dot_notation_indexing.py
feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… ( #8037 )
2025-01-28 16:27:06 -08:00
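dot_notation_indexing.py, by its name, likely implements lookups into nested dictionaries via dotted paths (e.g. for the JWT/RBAC config work referenced in the commit above). A hypothetical sketch of such a helper, with names chosen for illustration:

```python
from typing import Any

# Hypothetical dot-notation lookup: resolve a path like "a.b.c" against
# nested dicts, returning a default if any segment is missing.
# Illustrative sketch only, not litellm's actual implementation.
def get_nested(data: dict, path: str, default: Any = None) -> Any:
    current: Any = data
    for part in path.split("."):
        if not isinstance(current, dict) or part not in current:
            return default
        current = current[part]
    return current
```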
duration_parser.py
(Bug Fix + Better Observability) - BudgetResetJob: ( #8562 )
2025-02-15 16:13:08 -08:00
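duration_parser.py presumably parses shorthand duration strings; the README entry later in this listing mentions formats like 1s, 1d, 1mo, 2mo for budget reset windows. A minimal hypothetical parser for those formats (a sketch, not litellm's actual code — in particular, approximating a month as 30 days is an assumption made here for illustration):

```python
import re
from datetime import timedelta

# Hypothetical shorthand-duration parser for strings like "1s", "30m",
# "2h", "1d", "2mo". Illustrative only, not litellm's duration_parser.py.
_UNITS = {
    "s": lambda n: timedelta(seconds=n),
    "m": lambda n: timedelta(minutes=n),
    "h": lambda n: timedelta(hours=n),
    "d": lambda n: timedelta(days=n),
    "mo": lambda n: timedelta(days=30 * n),  # assume 30-day months
}

def parse_duration(value: str) -> timedelta:
    # "mo" must be tried before the single-letter units so "2mo" is not
    # misread as "2m" followed by a stray "o".
    match = re.fullmatch(r"(\d+)\s*(mo|[smhd])", value.strip())
    if match is None:
        raise ValueError(f"unrecognized duration: {value!r}")
    number, unit = int(match.group(1)), match.group(2)
    return _UNITS[unit](number)
```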
exception_mapping_utils.py
exception_type
2025-03-13 20:09:32 -07:00
fallback_utils.py
LiteLLM Minor Fixes & Improvements (2024/16/01) ( #7826 )
2025-01-17 20:59:21 -08:00
get_litellm_params.py
Merge branch 'main' into litellm_dev_03_12_2025_p1
2025-03-12 22:14:02 -07:00
get_llm_provider_logic.py
fix(get_llm_provider.py): Fixes https://github.com/BerriAI/litellm/issues/9291
2025-03-18 10:48:28 -07:00
get_model_cost_map.py
Doc updates + management endpoint fixes ( #8138 )
2025-01-30 22:56:41 -08:00
get_supported_openai_params.py
Add OpenAI gpt-4o-transcribe support ( #9517 )
2025-03-26 23:10:25 -07:00
health_check_utils.py
(Refactor) - Re use litellm.completion/litellm.embedding etc for health checks ( #7455 )
2024-12-28 18:38:54 -08:00
initialize_dynamic_callback_params.py
Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint ( #7493 )
2024-12-31 23:18:41 -08:00
json_validation_rule.py
feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls
2024-07-18 16:57:38 -07:00
litellm_logging.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
llm_request_utils.py
Revert "test_completion_mistral_api_mistral_large_function_call"
2025-01-17 07:20:46 -08:00
logging_callback_manager.py
get_custom_loggers_for_type
2025-03-19 16:36:41 -07:00
logging_utils.py
fix(streaming_handler.py): emit deep copy of completed chunk
2025-03-17 17:26:21 -07:00
mock_functions.py
Ensure base_model cost tracking works across all endpoints ( #7989 )
2025-01-24 21:05:26 -08:00
model_param_helper.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
README.md
(QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc ( #6885 )
2024-11-23 16:59:46 -08:00
realtime_streaming.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
redact_messages.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
response_header_helpers.py
fix(utils.py): guarantee openai-compatible headers always exist in response
2024-09-28 21:08:15 -07:00
rules.py
Litellm dev 11 07 2024 ( #6649 )
2024-11-08 19:34:22 +05:30
safe_json_dumps.py
Add recursion depth to convert_anyof_null_to_nullable, constants.py. Fix recursive_detector.py raise error state
2025-03-28 13:11:19 -07:00
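The commit above pairs safe_json_dumps.py with recursion-depth and recursive-structure fixes, which suggests a JSON serializer guarded against circular or very deep objects. A hypothetical depth-limited sketch of that idea (not litellm's actual implementation):

```python
import json
from typing import Any

# Hypothetical depth-limited JSON dump: values nested deeper than max_depth
# are replaced with a placeholder, so circular or very deep structures
# serialize finitely. Illustrative sketch only.
def safe_json_dumps(obj: Any, max_depth: int = 10) -> str:
    def limit(value: Any, depth: int) -> Any:
        if depth >= max_depth:
            return "<truncated>"
        if isinstance(value, dict):
            return {str(k): limit(v, depth + 1) for k, v in value.items()}
        if isinstance(value, (list, tuple)):
            return [limit(v, depth + 1) for v in value]
        return value

    # default=str keeps non-JSON-native leaf values from raising.
    return json.dumps(limit(obj, 0), default=str)
```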
sensitive_data_masker.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
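sensitive_data_masker.py, by its name, likely redacts secret-bearing values before they reach logs. A hypothetical key-name-based masker (the key pattern below is an assumption chosen for illustration, not litellm's actual rule set):

```python
import re

# Hypothetical masker: redact values whose key names look secret-bearing.
# The pattern is an illustrative assumption, not litellm's actual logic.
SENSITIVE_KEY_PATTERN = re.compile(r"(api[_-]?key|token|secret|password)", re.IGNORECASE)

def mask_sensitive(data: dict) -> dict:
    masked = {}
    for key, value in data.items():
        if isinstance(value, dict):
            masked[key] = mask_sensitive(value)  # recurse into nested dicts
        elif SENSITIVE_KEY_PATTERN.search(key):
            masked[key] = "****"
        else:
            masked[key] = value
    return masked
```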
streaming_chunk_builder_utils.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
streaming_handler.py
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
2025-03-29 11:02:13 -07:00
thread_pool_executor.py
(Fixes) OpenAI Streaming Token Counting + Fixes usage track when litellm.turn_off_message_logging=True ( #8156 )
2025-01-31 15:06:37 -08:00
token_counter.py
fix: Support WebP image format and avoid token calculation error ( #7182 )
2024-12-12 14:32:39 -08:00