LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)

* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none

Fixes https://github.com/BerriAI/litellm/issues/5500
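A minimal sketch of that fallback, with assumed helper and parameter names (the real logic lives in litellm_logging.py and operates on the logging object's timestamps):

```python
from typing import Optional


def resolve_completion_start_time(
    completion_start_time_float: Optional[float],
    end_time_float: float,
) -> float:
    # If streaming never recorded a first-token timestamp, fall back to the
    # end time so downstream latency calculations do not receive None.
    if completion_start_time_float is None:
        return end_time_float
    return completion_start_time_float
```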

* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list

Fixes https://github.com/BerriAI/litellm/issues/5558

Correctly routes Fireworks AI calls when they are made via the text completion endpoint
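Roughly, the new list lets the text-completion code path recognize providers whose APIs speak the OpenAI /completions format. A sketch of the idea, with an illustrative list (only fireworks_ai is confirmed by this change):

```python
# Providers whose text-completion APIs are OpenAI /completions compatible,
# so their calls can be sent through the OpenAI text-completion handler.
openai_text_completion_compatible_providers = [
    "fireworks_ai",
    # other compatible providers would be listed here
]


def use_openai_text_completion_route(custom_llm_provider: str) -> bool:
    # Route the call through the OpenAI-style text-completion handler if the
    # provider is known to be compatible.
    return custom_llm_provider in openai_text_completion_compatible_providers
```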

* fix: fix linting errors

* fix: fix linting errors

* fix(openai.py): fix exception raised

* fix(openai.py): fix error handling

* fix(_redis.py): allow all supported arguments for redis cluster (#5554)

* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)

This reverts commit f2191ef4cb.
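For reference, the intent of the (now reverted) _redis.py change was to forward every constructor argument that redis.cluster.RedisCluster actually supports, rather than a hard-coded subset. A hypothetical sketch of that idea, not the exact code that was reverted:

```python
import inspect

from redis.cluster import RedisCluster


def redis_cluster_kwargs(**redis_params) -> dict:
    # Keep only keyword arguments that RedisCluster's constructor accepts,
    # so a full redis config dict can be passed without raising TypeError.
    supported = set(inspect.signature(RedisCluster.__init__).parameters) - {"self"}
    return {k: v for k, v in redis_params.items() if k in supported}


# Usage sketch (values are placeholders):
# client = RedisCluster(**redis_cluster_kwargs(host="localhost", port=7000, password="hunter2"))
```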

* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()

Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
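A usage sketch of the behavior being fixed, assuming a minimal Router setup (model names, keys, and the shape of the returned entries are placeholders/assumptions):

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-placeholder"},
        }
    ],
    # Alias a public-facing name to the underlying deployment group.
    model_group_alias={"my-alias": "gpt-3.5-turbo"},
)

# After the fix, the alias is returned together with its underlying
# deployment instead of being omitted from the listing.
for deployment in router.get_model_list():
    print(deployment["model_name"], "->", deployment["litellm_params"]["model"])
```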

* test: handle flaky tests

---------

Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
Author: Krish Dholakia
Date: 2024-09-09 18:54:17 -07:00 (committed by GitHub)
Commit: 4ac66bd843
Parent: c86b333054
14 changed files with 101 additions and 34 deletions

@@ -1263,6 +1263,7 @@ class OpenAIChatCompletion(BaseLLM):
            error_headers = getattr(e, "headers", None)
            if response is not None and hasattr(response, "text"):
                error_headers = getattr(e, "headers", None)
                raise OpenAIError(
                    status_code=500,
                    message=f"{str(e)}\n\nOriginal Response: {response.text}",
@@ -1800,12 +1801,11 @@ class OpenAITextCompletion(BaseLLM):
        headers: Optional[dict] = None,
    ):
        super().completion()
        exception_mapping_worked = False
        try:
            if headers is None:
                headers = self.validate_environment(api_key=api_key)
            if model is None or messages is None:
                raise OpenAIError(status_code=422, message=f"Missing model or messages")
                raise OpenAIError(status_code=422, message="Missing model or messages")
            if (
                len(messages) > 0