From 489d45cdfd38dedb1c823c80c39e40d4843bdef9 Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Sat, 28 Oct 2023 12:25:58 -0700
Subject: [PATCH] Update README.md

---
 README.md | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/README.md b/README.md
index 49facbe2b..9bf061f62 100644
--- a/README.md
+++ b/README.md
@@ -64,6 +64,7 @@ print(response)
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
 Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
 ```python
+from litellm import completion
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
 for chunk in response:
     print(chunk['choices'][0]['delta'])
@@ -74,6 +75,16 @@ for chunk in result:
     print(chunk['choices'][0]['delta'])
 ```
 
+## Reliability - Fallback LLMs
+Never fail a request using LiteLLM: if the primary model errors, the request is retried with each fallback model in order.
+
+```python
+from litellm import completion
+# If gpt-4 fails, retry the request with gpt-3.5-turbo -> command-nightly -> claude-instant-1
+response = completion(model="gpt-4", messages=messages, fallbacks=["gpt-3.5-turbo", "command-nightly", "claude-instant-1"])
+```
+
+
 ## Supported Provider ([Docs](https://docs.litellm.ai/docs/providers))
 | Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
 | ------------- | ------------- | ------------- | ------------- | ------------- |
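
For reviewers, a minimal end-to-end sketch of the fallback call documented above. This assumes the relevant provider API keys (`OPENAI_API_KEY`, `COHERE_API_KEY`, `ANTHROPIC_API_KEY`) are set in the environment, and that `messages` follows the OpenAI chat format used throughout the README; the response is indexed dict-style to match the README's existing examples.

```python
# A self-contained sketch of the fallback usage added in this patch.
# Assumes OPENAI_API_KEY, COHERE_API_KEY, and ANTHROPIC_API_KEY are set
# in the environment for the four models referenced below.
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# If gpt-4 errors (rate limit, outage, invalid key), LiteLLM retries the
# same request against each fallback model in order until one succeeds.
response = completion(
    model="gpt-4",
    messages=messages,
    fallbacks=["gpt-3.5-turbo", "command-nightly", "claude-instant-1"],
)
print(response['choices'][0]['message']['content'])
```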