litellm/litellm/litellm_core_utils
Krish Dholakia 1cd1d23fdf
LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407)
* docs(bedrock.md): clarify bedrock auth in litellm docs

* fix(convert_dict_to_response.py): Fixes https://github.com/BerriAI/litellm/issues/6387

* feat(pattern_match_deployments.py): more robust handling for wildcard routes (model_name: custom_route/* -> openai/*)

Enables users to expose custom routes, with the target deployment resolved dynamically. A sketch of such a route follows.
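
A minimal sketch of what such a wildcard route could look like on litellm's `Router`; the `custom_route/*` name and the exact `model_list` entry below are illustrative, not the PR's own test setup:

```python
# Hypothetical wildcard route: any model requested under "custom_route/" is
# forwarded to the matching "openai/" model by the pattern-match logic.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "custom_route/*",           # pattern exposed to users
            "litellm_params": {"model": "openai/*"},  # resolved per request
        }
    ]
)
# e.g. a request for "custom_route/gpt-4o" would route to "openai/gpt-4o"
```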

* test: add more testing

* docs(custom_pricing.md): add debug tutorial for custom pricing

* test: skip codestral test - unreachable backend

* test: fix test

* fix(pattern_matching_deployments.py): fix typing

* test: cleanup codestral tests - backend api unavailable

* (refactor) prometheus async_log_success_event to be under 100 LOC  (#6416)

* unit testing for prometheus

* unit testing for success metrics

* use 1 helper for _increment_token_metrics

* use helper for _increment_remaining_budget_metrics

* use _increment_remaining_budget_metrics

* use _increment_top_level_request_and_spend_metrics

* use helper for _set_latency_metrics

* remove noqa violation

* fix test prometheus

* test prometheus

* unit testing for all prometheus helper functions

* fix prom unit tests

* fix unit tests prometheus

* fix unit test prom
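
A minimal sketch of the resulting shape, assuming helper names from the bullets above; the metric names, labels, and signatures here are illustrative, not litellm's actual implementation:

```python
# The orchestrator delegates to the four named helpers, keeping it under
# 100 LOC and making each metric family unit-testable in isolation.
from prometheus_client import Counter, Gauge, Histogram

TOKENS = Counter("litellm_tokens_total", "Tokens consumed", ["token_type"])
REQUESTS = Counter("litellm_requests_total", "Successful LLM requests")
SPEND = Counter("litellm_spend_total", "Spend in USD")
REMAINING_BUDGET = Gauge("litellm_remaining_budget", "Remaining key budget in USD")
LATENCY = Histogram("litellm_request_latency_seconds", "End-to-end request latency")


class PrometheusLogger:
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Orchestrator: one helper call per metric family.
        self._increment_token_metrics(response_obj)
        self._increment_remaining_budget_metrics(kwargs)
        self._increment_top_level_request_and_spend_metrics(kwargs)
        self._set_latency_metrics(start_time, end_time)

    def _increment_token_metrics(self, response_obj):
        usage = getattr(response_obj, "usage", None)
        if usage:
            TOKENS.labels(token_type="prompt").inc(usage.prompt_tokens)
            TOKENS.labels(token_type="completion").inc(usage.completion_tokens)

    def _increment_remaining_budget_metrics(self, kwargs):
        remaining = kwargs.get("remaining_budget")  # hypothetical key
        if remaining is not None:
            REMAINING_BUDGET.set(remaining)

    def _increment_top_level_request_and_spend_metrics(self, kwargs):
        REQUESTS.inc()
        SPEND.inc(kwargs.get("response_cost", 0.0))

    def _set_latency_metrics(self, start_time, end_time):
        LATENCY.observe((end_time - start_time).total_seconds())
```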

* (refactor) router - use static methods for client init utils  (#6420)

* use InitalizeOpenAISDKClient

* use InitalizeOpenAISDKClient static method

* fix # noqa: PLR0915
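
An illustrative sketch of the static-method pattern only; litellm's real `InitalizeOpenAISDKClient` has its own method names and signatures, and the `create_client` helper below is a hypothetical stand-in:

```python
import openai


class InitalizeOpenAISDKClient:
    """Client-init utilities as static methods: no Router instance state needed."""

    @staticmethod
    def create_client(litellm_params: dict) -> openai.AsyncOpenAI:
        # Hypothetical helper: build an SDK client from a deployment's params.
        return openai.AsyncOpenAI(
            api_key=litellm_params.get("api_key"),
            base_url=litellm_params.get("api_base"),
        )
```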

* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend  (#6406)

* code cleanup: remove unused and undocumented code files

* fix unused logging integrations cleanup

* bump: version 1.50.3 → 1.50.4

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-24 19:01:41 -07:00
| Name | Last commit message | Last updated |
| --- | --- | --- |
| audio_utils | fix import error | 2024-09-05 10:09:44 -07:00 |
| llm_cost_calc | LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139) | 2024-10-10 00:42:11 -07:00 |
| llm_response_utils | LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407) | 2024-10-24 19:01:41 -07:00 |
| asyncify.py | build(config.yml): bump anyio version | 2024-08-27 07:37:06 -07:00 |
| core_helpers.py | [Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881) | 2024-09-25 10:57:08 -07:00 |
| exception_mapping_utils.py | (code quality) add ruff check PLR0915 for too-many-statements (#6309) | 2024-10-18 15:36:49 +05:30 |
| get_llm_provider_logic.py | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| json_validation_rule.py | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls | 2024-07-18 16:57:38 -07:00 |
| litellm_logging.py | feat(litellm_logging.py): refactor standard_logging_payload function … (#6388) | 2024-10-24 18:59:01 -07:00 |
| llm_request_utils.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| logging_utils.py | (refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks (#6220) | 2024-10-15 12:45:12 +05:30 |
| mock_functions.py | test(router_code_coverage.py): check if all router functions are dire… (#6186) | 2024-10-14 22:44:00 -07:00 |
| realtime_streaming.py | Litellm dev 10 22 2024 (#6384) | 2024-10-22 21:18:54 -07:00 |
| redact_messages.py | LiteLLM Minor Fixes & Improvements (10/04/2024) (#6064) | 2024-10-04 21:28:53 -04:00 |
| response_header_helpers.py | fix(utils.py): guarantee openai-compatible headers always exist in response | 2024-09-28 21:08:15 -07:00 |
| streaming_chunk_builder_utils.py | Litellm openai audio streaming (#6325) | 2024-10-19 16:16:51 -07:00 |
| streaming_utils.py | fix(streaming_utils.py): fix generic_chunk_has_all_required_fields | 2024-08-26 21:13:02 -07:00 |
| token_counter.py | fix(token_counter.py): New `get_modified_max_tokens` helper func | 2024-06-27 15:38:09 -07:00 |