From 76a5f38ec97fda5a48429abbb39c8fc921a998b5 Mon Sep 17 00:00:00 2001
From: Krish Dholakia
Date: Sat, 21 Oct 2023 14:45:31 -0700
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 841936a3f..ad88e41c8 100644
--- a/README.md
+++ b/README.md
@@ -64,7 +64,7 @@ print(response)
 ## Streaming ([Docs](https://docs.litellm.ai/docs/completion/stream))
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
-Streaming is supported for OpenAI, Azure, Anthropic, Huggingface models
+Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
 ```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
 for chunk in response:
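The snippet touched by this patch iterates over a streaming response. As a minimal sketch of what consuming that iterator looks like, the code below accumulates text from OpenAI-style delta chunks; the `mock_stream` generator and the dict-shaped chunks are stand-ins for a real `completion(..., stream=True)` call, not litellm's actual return type:

```python
# Sketch: consuming a streaming completion iterator.
# The chunk shape mirrors the OpenAI-style delta format; mock_stream
# is a hypothetical stand-in for completion(model=..., stream=True).

def collect_stream(chunks):
    """Join the text content carried by a stream of delta chunks."""
    pieces = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        content = delta.get("content")
        if content:
            pieces.append(content)
    return "".join(pieces)

def mock_stream():
    # Yields chunks the way a streaming API would: one small delta at a time.
    for piece in ["Hello", ", ", "world", "!"]:
        yield {"choices": [{"delta": {"content": piece}}]}

print(collect_stream(mock_stream()))  # Hello, world!
```

Checking for a missing or `None` `content` matters in practice, since the first and last chunks of a real stream often carry role or finish metadata rather than text.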