| Name | Last commit message | Last commit date |
| --- | --- | --- |
| audio_utils | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| llm_cost_calc | LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 (#7394) | 2024-12-23 22:02:52 -08:00 |
| llm_response_utils | LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 (#7263) | 2024-12-17 15:33:36 -08:00 |
| prompt_templates | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| specialty_caches | Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) | 2024-12-31 23:18:41 -08:00 |
| tokenizers | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| asyncify.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| core_helpers.py | fix unused imports | 2025-01-02 22:28:22 -08:00 |
| default_encoding.py | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| duration_parser.py | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| exception_mapping_utils.py | Litellm dev 12 30 2024 p2 (#7495) | 2025-01-01 18:57:29 -08:00 |
| get_llm_provider_logic.py | (litellm sdk - perf improvement) - use O(1) set lookups for checking llm providers / models (#7672) | 2025-01-10 14:16:30 -08:00 |
| get_supported_openai_params.py | LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 (#7670) | 2025-01-10 17:49:05 -08:00 |
| health_check_utils.py | (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks (#7455) | 2024-12-28 18:38:54 -08:00 |
| initialize_dynamic_callback_params.py | Fix team-based logging to langfuse + allow custom tokenizer on /token_counter endpoint (#7493) | 2024-12-31 23:18:41 -08:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | (performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions (#7680) | 2025-01-10 17:57:22 -08:00 |
| llm_request_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| logging_utils.py | Complete 'requests' library removal (#7350) | 2024-12-22 07:21:25 -08:00 |
| mock_functions.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| README.md | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| realtime_streaming.py | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| redact_messages.py | Litellm dev 01 02 2025 p1 (#7516) | 2025-01-03 14:40:57 -08:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| rules.py | Litellm dev 11 07 2024 (#6649) | 2024-11-08 19:34:22 +05:30 |
| streaming_chunk_builder_utils.py | LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 (#7643) | 2025-01-08 19:45:19 -08:00 |
| streaming_handler.py | fix(main.py): fix lm_studio/ embedding routing (#7658) | 2025-01-09 23:03:24 -08:00 |
| token_counter.py | fix: Support WebP image format and avoid token calculation error (#7182) | 2024-12-12 14:32:39 -08:00 |