LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)

* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none

Fixes https://github.com/BerriAI/litellm/issues/5500
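The fallback can be sketched as follows (hypothetical helper name; the actual fix lives inline in litellm_logging.py's timing logic):

```python
from typing import Optional

def resolve_completion_start_time(
    completion_start_time_float: Optional[float], end_time_float: float
) -> float:
    """If no streaming chunk ever recorded a completion start time,
    fall back to the request's end time so downstream latency math
    never operates on None."""
    if completion_start_time_float is None:
        return end_time_float
    return completion_start_time_float
```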

* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list

Fixes https://github.com/BerriAI/litellm/issues/5558

Correctly routes Fireworks AI calls when they are made via text completions
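The extended routing condition can be sketched as below. The provider list contents here are illustrative only, not the actual value of litellm.openai_text_completion_compatible_providers:

```python
# Illustrative stand-in for litellm.openai_text_completion_compatible_providers
OPENAI_TEXT_COMPLETION_COMPATIBLE_PROVIDERS = ["fireworks_ai", "together_ai"]

def routes_to_openai_text_completion(custom_llm_provider, model, kwargs):
    """Mirror of the condition added in completion(): route to the
    OpenAI text-completion code path for the dedicated provider,
    finetuned babbage/davinci models, or any compatible provider
    when the caller explicitly asked for text completion."""
    return (
        custom_llm_provider == "text-completion-openai"
        or "ft:babbage-002" in model
        or "ft:davinci-002" in model
        or (
            custom_llm_provider in OPENAI_TEXT_COMPLETION_COMPATIBLE_PROVIDERS
            and kwargs.get("text_completion") is True
        )
    )
```

Note that `and` binds tighter than `or` in Python, so the parentheses around the last clause only make the grouping explicit; the behavior matches the un-parenthesized diff.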

* fix: fix linting errors

* fix: fix linting errors

* fix(openai.py): fix exception raised

* fix(openai.py): fix error handling

* fix(_redis.py): allow all supported arguments for redis cluster (#5554)

* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)

This reverts commit f2191ef4cb.

* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()

Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
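A rough sketch of the intended behavior, with hypothetical function name and data shapes (the real logic lives in Router.get_model_list):

```python
def get_model_list_with_aliases(deployments, model_alias_map):
    """Return all deployments, plus one extra entry per alias that
    carries the alias as its model_name but the underlying
    deployment's litellm_params."""
    out = [dict(d) for d in deployments]
    for alias, target in model_alias_map.items():
        for d in deployments:
            if d["model_name"] == target:
                entry = dict(d)
                entry["model_name"] = alias
                out.append(entry)
    return out
```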

* test: handle flaky tests

---------

Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
Krish Dholakia 2024-09-09 18:54:17 -07:00 committed by GitHub
parent c86b333054
commit 4ac66bd843
GPG key ID: B5690EEEBB952194
14 changed files with 101 additions and 34 deletions


@@ -1209,6 +1209,9 @@ def completion(
         custom_llm_provider == "text-completion-openai"
         or "ft:babbage-002" in model
         or "ft:davinci-002" in model  # support for finetuned completion models
+        or custom_llm_provider
+        in litellm.openai_text_completion_compatible_providers
+        and kwargs.get("text_completion") is True
     ):
         openai.api_type = "openai"
@@ -4099,8 +4102,8 @@ def text_completion(
     kwargs.pop("prompt", None)
-    if (
-        _model is not None and custom_llm_provider == "openai"
+    if _model is not None and (
+        custom_llm_provider == "openai"
     ):  # for openai compatible endpoints - e.g. vllm, call the native /v1/completions endpoint for text completion calls
         if _model not in litellm.open_ai_chat_completion_models:
             model = "text-completion-openai/" + _model