Krish Dholakia 4eca6ede4e Litellm dev 11 21 2024 (#6837)
* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)

* test: test tool_call conversion when arguments is empty dict

Fixes https://github.com/BerriAI/litellm/issues/6833

* fix(openai_like/handler.py): return more descriptive error message

Fixes https://github.com/BerriAI/litellm/issues/6812

* test: skip overloaded model

* docs(anthropic.md): update anthropic docs to show how to route to any new model

* feat(groq/): fake stream when 'response_format' param is passed

Groq doesn't support streaming when response_format is set

* feat(groq/): add response_format support for groq

Closes https://github.com/BerriAI/litellm/issues/6845

* fix(o1_handler.py): remove fake streaming for o1

Closes https://github.com/BerriAI/litellm/issues/6801

* build(model_prices_and_context_window.json): add groq llama3.2b model pricing

Closes https://github.com/BerriAI/litellm/issues/6807

* fix(utils.py): fix handling ollama response format param

Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485

* docs(sidebars.js): refactor chat endpoint placement

* fix: fix linting errors

* test: fix test

* test: fix test

* fix(openai_like/handler): handle max retries

* fix(streaming_handler.py): fix streaming check for openai-compatible providers

* test: update test

* test: correctly handle model is overloaded error

* test: update test

* test: fix test

* test: mark flaky test

---------

Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
2024-11-22 01:53:52 +05:30
audio_utils fix import error 2024-09-05 10:09:44 -07:00
llm_cost_calc LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139) 2024-10-10 00:42:11 -07:00
llm_response_utils (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) 2024-11-04 15:47:48 -08:00
asyncify.py build(config.yml): bump anyio version 2024-08-27 07:37:06 -07:00
core_helpers.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
default_encoding.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
exception_mapping_utils.py [Feature]: Stop swallowing up AzureOpenAi exception responses in litellm's implementation for a BadRequestError (#6745) 2024-11-14 15:54:28 -08:00
get_llm_provider_logic.py chore: comment for maritalk (#6607) 2024-11-07 12:20:12 -08:00
get_supported_openai_params.py LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) 2024-11-15 11:18:31 +05:30
json_validation_rule.py feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls 2024-07-18 16:57:38 -07:00
litellm_logging.py LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) 2024-11-15 11:18:31 +05:30
llm_request_utils.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
logging_utils.py (refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks (#6220) 2024-10-15 12:45:12 +05:30
mock_functions.py test(router_code_coverage.py): check if all router functions are dire… (#6186) 2024-10-14 22:44:00 -07:00
README.md Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
realtime_streaming.py Litellm dev 10 22 2024 (#6384) 2024-10-22 21:18:54 -07:00
redact_messages.py LiteLLM Minor Fixes & Improvements (10/04/2024) (#6064) 2024-10-04 21:28:53 -04:00
response_header_helpers.py fix(utils.py): guarantee openai-compatible headers always exist in response 2024-09-28 21:08:15 -07:00
rules.py Litellm dev 11 07 2024 (#6649) 2024-11-08 19:34:22 +05:30
streaming_chunk_builder_utils.py LiteLLM Minor Fixes & Improvements (11/05/2024) (#6590) 2024-11-07 04:17:05 +05:30
streaming_handler.py Litellm dev 11 21 2024 (#6837) 2024-11-22 01:53:52 +05:30
token_counter.py fix(token_counter.py): New `get_modified_max_tokens` helper func 2024-06-27 15:38:09 -07:00

Folder Contents

This folder (litellm/litellm_core_utils) contains general-purpose utilities that are used in multiple places across the codebase.

Core files:

  • streaming_handler.py: the core streaming logic plus streaming-related helper utils (a chunk-stitching sketch follows this list).
  • core_helpers.py: helpers used in types/, e.g. map_finish_reason (see the finish-reason sketch below).
  • exception_mapping_utils.py: utils for mapping provider exceptions to OpenAI-compatible error types (sketched below).
  • default_encoding.py: code for loading the default encoding (tiktoken); a token-counting sketch follows below.
  • get_llm_provider_logic.py: code for inferring the LLM provider from a given model name (sketched at the end of this section).
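
A quick illustration of the kind of helper streaming_handler.py provides: stitching OpenAI-style delta chunks back into a complete message. This is a minimal sketch of the idea only, not litellm's actual code, and the join_content name is hypothetical.

```python
# Minimal sketch: reassemble the content of an OpenAI-style chunk stream.
# Illustrative only; not streaming_handler.py's actual implementation.
from typing import Iterable


def join_content(chunks: Iterable[dict]) -> str:
    """Concatenate the content deltas of OpenAI-style stream chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")
    return "".join(parts)


chunks = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(join_content(chunks))  # -> "Hello"
```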
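
map_finish_reason exists because each provider reports its own finish reasons, while callers expect OpenAI's small vocabulary ("stop", "length", "content_filter", "tool_calls"). Here is a minimal sketch of that normalization; the table below is illustrative, not litellm's exact mapping.

```python
# Hedged sketch of finish-reason normalization; the entries below are
# illustrative examples, not litellm's exact table.
def map_finish_reason(finish_reason: str) -> str:
    """Normalize a provider-specific finish reason to an OpenAI-style one."""
    mapping = {
        "stop_sequence": "stop",     # e.g. Anthropic
        "eos_token": "stop",         # e.g. Hugging Face TGI
        "max_tokens": "length",
        "MAX_TOKENS": "length",      # e.g. Vertex AI / Gemini
        "SAFETY": "content_filter",  # e.g. Vertex AI / Gemini
        "tool_use": "tool_calls",    # e.g. Anthropic
    }
    # Pass unknown values through unchanged rather than failing.
    return mapping.get(finish_reason, finish_reason)


print(map_finish_reason("eos_token"))  # -> "stop"
```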
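
The exception-mapping utils let callers catch one OpenAI-style set of exception types no matter which backend raised the error. The sketch below shows the basic shape under the assumption of a status-code-keyed table; the stub classes and the map_exception helper are hypothetical, and the real code also inspects error messages and providers.

```python
# Hedged sketch: choose an OpenAI-compatible exception class by HTTP status.
# Stub classes and table are illustrative, not litellm's actual code.
class BadRequestError(Exception): ...
class AuthenticationError(Exception): ...
class RateLimitError(Exception): ...
class InternalServerError(Exception): ...


STATUS_TO_ERROR = {
    400: BadRequestError,
    401: AuthenticationError,
    429: RateLimitError,  # covers "model is overloaded" style errors too
    500: InternalServerError,
}


def map_exception(status_code: int, message: str) -> Exception:
    """Return an OpenAI-style exception instance for a provider error."""
    cls = STATUS_TO_ERROR.get(status_code, InternalServerError)
    return cls(message)


try:
    raise map_exception(429, "model is overloaded")
except RateLimitError as err:
    print("rate limited:", err)
```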
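
Loading the default encoding is essentially a one-liner with tiktoken. Using "cl100k_base" below is an assumption for illustration; check default_encoding.py for the encoding actually loaded.

```python
# Minimal sketch of loading a default tokenizer via tiktoken.
# "cl100k_base" is assumed here for illustration.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")


def count_tokens(text: str) -> int:
    """Count tokens in a string with the default encoding."""
    return len(encoding.encode(text))


print(count_tokens("hello world"))  # -> 2 with cl100k_base
```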
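
Provider inference commonly keys off a "provider/model" prefix, e.g. "groq/llama-3.1-8b-instant". The sketch below shows only that one signal; the infer_provider name and the prefix set are hypothetical, and the real logic also considers api_base, API keys, and provider-specific model lists.

```python
# Hedged sketch of inferring a provider from a "provider/model" prefix.
# The prefix set and fallback are illustrative, not litellm's full logic.
from typing import Tuple

KNOWN_PREFIXES = {"openai", "anthropic", "groq", "ollama", "vertex_ai"}


def infer_provider(model: str) -> Tuple[str, str]:
    """Return (model, provider) inferred from the model string."""
    prefix, sep, rest = model.partition("/")
    if sep and prefix in KNOWN_PREFIXES:
        return rest, prefix
    # Fallback assumption: OpenAI-style names like "gpt-4o".
    return model, "openai"


print(infer_provider("groq/llama-3.1-8b-instant"))
# -> ('llama-3.1-8b-instant', 'groq')
```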