| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| audio_utils/ | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| llm_cost_calc/ | LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) | 2024-12-23 22:02:52 -08:00 |
| llm_response_utils/ | Litellm dev bedrock anthropic 3 7 v2 (#8843) | 2025-02-26 16:05:33 -08:00 |
| prompt_templates/ | Litellm dev bedrock anthropic 3 7 v2 (#8843) | 2025-02-26 16:05:33 -08:00 |
| specialty_caches/ | Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) | 2024-12-31 23:18:41 -08:00 |
| tokenizers/ | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| asyncify.py | (core sdk fix) - fix fallbacks stuck in infinite loop (#7751) | 2025-01-13 19:34:34 -08:00 |
| core_helpers.py | fix unused imports | 2025-01-02 22:28:22 -08:00 |
| dd_tracing.py | (Bug fix) - don't log messages in model_parameters in StandardLoggingPayload (#8932) | 2025-03-01 13:39:45 -08:00 |
| default_encoding.py | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| dot_notation_indexing.py | feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… (#8037) | 2025-01-28 16:27:06 -08:00 |
| duration_parser.py | (Bug Fix + Better Observability) - BudgetResetJob: (#8562) | 2025-02-15 16:13:08 -08:00 |
| exception_mapping_utils.py | Fix bedrock passing response_format: {"type": "text"} (#8900) | 2025-02-28 20:09:59 -08:00 |
| fallback_utils.py | LiteLLM Minor Fixes & Improvements (2024/16/01) (#7826) | 2025-01-17 20:59:21 -08:00 |
| get_litellm_params.py | build: merge squashed commit | 2025-02-26 09:39:27 -08:00 |
| get_llm_provider_logic.py | Fix deepseek calling - refactor to use base_llm_http_handler (#8266) | 2025-02-04 22:30:00 -08:00 |
| get_model_cost_map.py | Doc updates + management endpoint fixes (#8138) | 2025-01-30 22:56:41 -08:00 |
| get_supported_openai_params.py | fix(utils.py): fix vertex ai optional param handling (#8477) | 2025-02-13 19:58:50 -08:00 |
| health_check_utils.py | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| initialize_dynamic_callback_params.py | Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) | 2024-12-31 23:18:41 -08:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | (Bug fix) - don't log messages in model_parameters in StandardLoggingPayload (#8932) | 2025-03-01 13:39:45 -08:00 |
| llm_request_utils.py | Revert "test_completion_mistral_api_mistral_large_function_call" | 2025-01-17 07:20:46 -08:00 |
| logging_callback_manager.py | (Feat) - Allow viewing Request/Response Logs stored in GCS Bucket (#8449) | 2025-02-10 20:38:55 -08:00 |
| logging_utils.py | Add datadog health check support + fix bedrock converse cost tracking w/ region name specified (#7958) | 2025-01-23 22:17:09 -08:00 |
| mock_functions.py | Ensure base_model cost tracking works across all endpoints (#7989) | 2025-01-24 21:05:26 -08:00 |
| model_param_helper.py | (Bug fix) - don't log messages in model_parameters in StandardLoggingPayload (#8932) | 2025-03-01 13:39:45 -08:00 |
| README.md | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| realtime_streaming.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| redact_messages.py | Litellm staging (#8270) | 2025-02-04 22:35:48 -08:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| rules.py | Litellm dev 11 07 2024 (#6649) | 2024-11-08 19:34:22 +05:30 |
| safe_json_dumps.py | (Bug fix) - Cache Health not working when configured with prometheus service logger (#8687) | 2025-02-20 13:41:56 -08:00 |
| sensitive_data_masker.py | Litellm dev 02 07 2025 p2 (#8377) | 2025-02-07 17:30:38 -08:00 |
| streaming_chunk_builder_utils.py | LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643) | 2025-01-08 19:45:19 -08:00 |
| streaming_handler.py | Fix deepseek 'reasoning_content' error (#8963) | 2025-03-03 14:34:10 -08:00 |
| thread_pool_executor.py | (Fixes) OpenAI Streaming Token Counting + Fixes usage track when litellm.turn_off_message_logging=True (#8156) | 2025-01-31 15:06:37 -08:00 |
| token_counter.py | fix: Support WebP image format and avoid token calculation error (#7182) | 2024-12-12 14:32:39 -08:00 |