LiteLLM Minor Fixes & Improvements (10/04/2024) (#6064)

* fix(litellm_logging.py): ensure cache hits are scrubbed if 'turn_off_message_logging' is enabled
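
A minimal sketch of the behavior this covers (model name is illustrative): with turn_off_message_logging enabled, message and response content in the standard logging payload should read "redacted-by-litellm" even when the response is served from cache, as the test diff at the bottom checks.

    import litellm

    litellm.turn_off_message_logging = True
    litellm.cache = litellm.Cache()  # in-memory cache

    for _ in range(2):  # the second call is a cache hit; it must also be scrubbed
        litellm.completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "hi"}],
        )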

* fix(sagemaker.py): fix streaming to raise error immediately

Fixes https://github.com/BerriAI/litellm/issues/6054
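
Illustrative only: after the fix, a failing streaming SageMaker call should raise as soon as the stream is opened rather than failing silently mid-stream. The endpoint name below is a placeholder.

    import litellm

    try:
        stream = litellm.completion(
            model="sagemaker/my-nonexistent-endpoint",  # placeholder
            messages=[{"role": "user", "content": "hi"}],
            stream=True,
        )
        for chunk in stream:
            print(chunk)
    except Exception as e:
        print(f"raised immediately: {e}")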

* (fixes) GCS bucket key-based logging (#6044); see the sketch after this list

* fixes for gcs bucket logging

* fix StandardCallbackDynamicParams

* fix - gcs logging when payload is not serializable

* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket

* working success callbacks

* linting fixes

* fix linting error

* add type hints to functions

* fixes for dynamic success and failure logging

* fix for test_async_chat_openai_stream

* fix: handle the case where key-based logging vars are set as os.environ/ vars
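
A rough sketch of key-based GCS logging: generate a virtual key whose requests are logged to a GCS bucket. The metadata schema follows litellm's key-based logging docs; the bucket name, master key, and proxy URL are placeholders, and callback_vars may also be given as os.environ/ references.

    import requests

    resp = requests.post(
        "http://0.0.0.0:4000/key/generate",
        headers={"Authorization": "Bearer sk-1234"},  # proxy master key (placeholder)
        json={
            "metadata": {
                "logging": [
                    {
                        "callback_name": "gcs_bucket",
                        "callback_type": "success",
                        "callback_vars": {
                            "gcs_bucket_name": "my-litellm-logs",
                            "gcs_path_service_account": "os.environ/GCS_PATH_SERVICE_ACCOUNT",
                        },
                    }
                ]
            }
        },
    )
    print(resp.json()["key"])  # requests made with this key log to the bucket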

* fix prometheus track cooldown events on custom logger (#6060)
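
Not the PR's code, but the shape of the hook involved: a CustomLogger that counts deployment failures (the events that drive router cooldowns). The fix wires similar accounting into the bundled Prometheus logger.

    import litellm
    from litellm.integrations.custom_logger import CustomLogger

    class CooldownCounter(CustomLogger):
        def __init__(self):
            self.failures_by_model: dict[str, int] = {}

        def log_failure_event(self, kwargs, response_obj, start_time, end_time):
            model = kwargs.get("model", "unknown")
            self.failures_by_model[model] = self.failures_by_model.get(model, 0) + 1

    litellm.callbacks = [CooldownCounter()]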

* (docs) add 1k RPS load test doc (#6059); see the locust sketch after this list

* docs 1k rps load test

* docs load testing

* docs load testing litellm

* docs load testing

* clean up load test doc

* docs prom metrics for load testing

* docs using prometheus on load testing

* doc load testing with prometheus
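
A minimal locustfile sketch in the spirit of the 1k RPS doc (proxy URL, model alias, and key are placeholders):

    from locust import HttpUser, task

    class ProxyUser(HttpUser):
        @task
        def chat_completion(self):
            self.client.post(
                "/chat/completions",
                json={
                    "model": "fake-openai-endpoint",  # placeholder model alias
                    "messages": [{"role": "user", "content": "hi"}],
                },
                headers={"Authorization": "Bearer sk-1234"},  # placeholder key
            )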

* (fixes) docs + QA - GCS key-based logging (#6061)

* fixes for required values for gcs bucket

* docs gcs bucket logging

* bump: version 1.48.12 → 1.48.13

* ci/cd run again

* bump: version 1.48.13 → 1.48.14

* update load test doc

* (docs) router settings on the litellm config (#6037); see the yaml sketch after this list

* add yaml with all router settings

* add docs for router settings

* docs router settings litellm settings
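
A condensed router_settings sketch for the proxy config (values are illustrative; the doc added here covers the full set of options):

    router_settings:
      routing_strategy: usage-based-routing-v2
      redis_host: os.environ/REDIS_HOST
      redis_port: os.environ/REDIS_PORT
      num_retries: 2
      timeout: 30
      allowed_fails: 3
      cooldown_time: 30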

* (feat) add OpenAI prompt caching models to the model cost map (#6063); see the sketch after these items

* add prompt caching for latest models

* add cache_read_input_token_cost for prompt caching models
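
A sketch of what the new map entries enable (model name illustrative; the exact models added in this PR may differ):

    import litellm

    info = litellm.get_model_info("gpt-4o-2024-08-06")
    print(info.get("input_cost_per_token"))
    print(info.get("cache_read_input_token_cost"))  # per-token cost for cached prompt reads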

* fix(litellm_logging.py): check if param is iterable

Fixes https://github.com/BerriAI/litellm/issues/6025#issuecomment-2393929946
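
The general pattern behind this kind of fix (illustrative, not the exact code): guard before iterating a param that may be None or a non-iterable scalar.

    from collections.abc import Iterable

    def iter_param(param):
        if isinstance(param, Iterable) and not isinstance(param, (str, bytes)):
            yield from param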

* fix(factory.py): support passing an 'assistant_continue_message' to prevent bedrock error

Fixes https://github.com/BerriAI/litellm/issues/6053
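
Hypothetical usage sketch: Bedrock enforces strict user/assistant alternation, and litellm can insert a default assistant "continue" message where the sequence would otherwise break; this change lets the caller supply their own. Passing it as a completion kwarg is an assumption here.

    import litellm

    litellm.completion(
        model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[
            {"role": "user", "content": "First question."},
            {"role": "user", "content": "Second question."},  # consecutive user turns
        ],
        assistant_continue_message={"role": "assistant", "content": "Please continue."},
    )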

* fix(databricks/chat): handle streaming responses
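
Usage sketch for the streaming path this touches (model name illustrative):

    import litellm

    stream = litellm.completion(
        model="databricks/databricks-dbrx-instruct",
        messages=[{"role": "user", "content": "hi"}],
        stream=True,
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")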

* fix(factory.py): fix linting error

* fix(utils.py): unify anthropic + deepseek prompt caching information to openai format

Fixes https://github.com/BerriAI/litellm/issues/6069
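
Sketch of the unified, OpenAI-style usage shape after this change: cached-token counts from Anthropic/Deepseek surface under prompt_tokens_details, as OpenAI reports them. Field availability depends on the provider response; cache_control markers are omitted from the example messages.

    import litellm

    response = litellm.completion(
        model="anthropic/claude-3-5-sonnet-20240620",
        messages=[{"role": "user", "content": "hi"}],
    )
    print(response.usage.prompt_tokens_details.cached_tokens)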

* test: fix test

* fix(types/utils.py): support all openai roles

Fixes https://github.com/BerriAI/litellm/issues/6052
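
A rough illustration of the accepted role set (the exact Literal in types/utils.py may differ):

    from typing import Literal

    ChatCompletionRole = Literal["system", "user", "assistant", "tool", "function"]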

* test: fix test

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Commit 2e5c46ef6d (parent fc6e0dd6cb)
Krish Dholakia, 2024-10-04 21:28:53 -04:00, committed by GitHub
19 changed files with 1034 additions and 259 deletions


@@ -1258,6 +1258,7 @@ def test_standard_logging_payload(model, turn_off_message_logging):
         "standard_logging_object"
     ]
 
     if turn_off_message_logging:
+        print("checks redacted-by-litellm")
         assert "redacted-by-litellm" == slobject["messages"][0]["content"]
         assert "redacted-by-litellm" == slobject["response"]
@@ -1307,9 +1308,15 @@ def test_aaastandard_logging_payload_cache_hit():
     assert standard_logging_object["saved_cache_cost"] > 0
 
 
-def test_logging_async_cache_hit_sync_call():
+@pytest.mark.parametrize(
+    "turn_off_message_logging",
+    [False, True],
+)  # False
+def test_logging_async_cache_hit_sync_call(turn_off_message_logging):
     from litellm.types.utils import StandardLoggingPayload
 
+    litellm.turn_off_message_logging = turn_off_message_logging
+
     litellm.cache = Cache()
 
     response = litellm.completion(
@@ -1356,6 +1363,14 @@ def test_logging_async_cache_hit_sync_call():
     assert standard_logging_object["response_cost"] == 0
     assert standard_logging_object["saved_cache_cost"] > 0
 
+    if turn_off_message_logging:
+        print("checks redacted-by-litellm")
+        assert (
+            "redacted-by-litellm"
+            == standard_logging_object["messages"][0]["content"]
+        )
+        assert "redacted-by-litellm" == standard_logging_object["response"]
+
 
 def test_logging_standard_payload_failure_call():
     from litellm.types.utils import StandardLoggingPayload