| Name | Last commit message | Last commit date |
| --- | --- | --- |
| audio_utils | fix import error | 2024-09-05 10:09:44 -07:00 |
| llm_cost_calc | LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139) | 2024-10-10 00:42:11 -07:00 |
| llm_response_utils | (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) | 2024-11-04 15:47:48 -08:00 |
| prompt_templates | (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) | 2024-12-11 00:02:46 -08:00 |
| tokenizers | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| asyncify.py | build(config.yml): bump anyio version | 2024-08-27 07:37:06 -07:00 |
| core_helpers.py | Litellm dev 11 07 2024 (#6649) | 2024-11-08 19:34:22 +05:30 |
| default_encoding.py | Code Quality Improvement - remove tokenizers/ from /llms (#7163) | 2024-12-10 23:50:15 -08:00 |
| duration_parser.py | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| exception_mapping_utils.py | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| get_llm_provider_logic.py | Litellm merge pr (#7161) | 2024-12-10 22:49:26 -08:00 |
| get_supported_openai_params.py | fix(get_supported_openai_params.py): cleanup (#7176) | 2024-12-11 01:15:53 -08:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | feat(langfuse/): support langfuse prompt management (#7073) | 2024-12-06 23:10:22 -08:00 |
| llm_request_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| logging_utils.py | (refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks (#6220) | 2024-10-15 12:45:12 +05:30 |
| mock_functions.py | test(router_code_coverage.py): check if all router functions are dire… (#6186) | 2024-10-14 22:44:00 -07:00 |
| README.md | (QOL improvement) Provider budget routing - allow using 1s, 1d, 1mo, 2mo etc (#6885) | 2024-11-23 16:59:46 -08:00 |
| realtime_streaming.py | Litellm dev 10 22 2024 (#6384) | 2024-10-22 21:18:54 -07:00 |
| redact_messages.py | (feat) Allow enabling logging message / response for specific virtual keys (#7071) | 2024-12-06 21:25:36 -08:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| rules.py | Litellm dev 11 07 2024 (#6649) | 2024-11-08 19:34:22 +05:30 |
| streaming_chunk_builder_utils.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7051) | 2024-12-06 14:29:53 -08:00 |
| streaming_handler.py | (Refactor) Code Quality improvement - rename text_completion_codestral.py -> codestral/completion/ (#7172) | 2024-12-11 00:55:47 -08:00 |
| token_counter.py | fix(token_counter.py): New `get_modified_max_tokens' helper func | 2024-06-27 15:38:09 -07:00 |
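
The `asyncify.py` entry above only records an anyio version bump, but the file name points at the common pattern of running a blocking function on a worker thread so it can be awaited. A minimal sketch of that pattern built on `anyio.to_thread.run_sync` (illustrative only, not the actual `asyncify.py` implementation):

```python
import functools
from typing import Awaitable, Callable, TypeVar

import anyio

T = TypeVar("T")


def asyncify(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:
    """Wrap a blocking function so awaiting it does not block the event loop."""

    @functools.wraps(func)
    async def wrapper(*args, **kwargs) -> T:
        # run_sync only forwards the callable itself, so bind all arguments
        # with functools.partial before handing the call to a worker thread.
        bound = functools.partial(func, *args, **kwargs)
        return await anyio.to_thread.run_sync(bound)

    return wrapper
```

For example, `await asyncify(time.sleep)(1)` sleeps on a worker thread while the event loop keeps running.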
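
The `duration_parser.py` commit mentions provider budget durations written as `1s`, `1d`, `1mo`, `2mo`, etc. A minimal sketch of a parser for that format, approximating a month as 30 days (a hypothetical helper, not the actual `duration_parser.py` code):

```python
import re
from datetime import timedelta

# "mo" must be tried before the single-letter units so "2mo" is not
# misread as 2 minutes.
_DURATION_RE = re.compile(r"^(\d+)(mo|[smhd])$")

_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86_400, "mo": 30 * 86_400}


def parse_duration(value: str) -> timedelta:
    """Parse strings like '1s', '1d', or '2mo' into a timedelta."""
    match = _DURATION_RE.match(value.strip())
    if match is None:
        raise ValueError(f"Unrecognized duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return timedelta(seconds=amount * _UNIT_SECONDS[unit])


# A 2-month budget window under the 30-day-month assumption.
assert parse_duration("2mo") == timedelta(days=60)
```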
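
The `logging_utils.py` commit names a helper, `_assemble_complete_response_from_streaming_chunks`, for rebuilding a full response inside caching and logging callbacks. The real helper operates on LiteLLM response objects; the sketch below only shows the underlying idea on plain OpenAI-style chunk dicts (an assumption for illustration):

```python
from typing import Any, Dict, List


def assemble_complete_text(chunks: List[Dict[str, Any]]) -> str:
    """Concatenate the delta content of OpenAI-style streaming chunks."""
    parts: List[str] = []
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            # Each streamed choice carries an incremental "delta" payload.
            content = (choice.get("delta") or {}).get("content")
            if content:
                parts.append(content)
    return "".join(parts)


chunks = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
]
assert assemble_complete_text(chunks) == "Hello, world"
```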