From 1b81f4ad79bd18c5e536e6dedda4d102f54657cb Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Tue, 8 Aug 2023 16:19:34 -0700
Subject: [PATCH] Update README.md

---
 README.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/README.md b/README.md
index 8df5f3dde..5c671dd05 100644
--- a/README.md
+++ b/README.md
@@ -51,10 +51,15 @@ pip install litellm==0.1.345
 
 ## Streaming Queries
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
+Streaming is supported for OpenAI, Azure, and Anthropic models.
 ```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
 for chunk in response:
     print(chunk['choices'][0]['delta'])
+
+# claude 2
+result = litellm.completion('claude-2', messages, stream=True)
+for chunk in result:
+    print(chunk['choices'][0]['delta'])
 ```
 
 # hosted version
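For readers trying the API this patch documents, here is a minimal end-to-end sketch of consuming the streaming iterator. The `messages` payload and the `{'content': ...}` shape of each chunk's `delta` are assumptions for illustration; the patch itself only prints each raw delta.

```python
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# stream=True returns an iterator of incremental chunks instead of one response
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

# Rebuild the full reply by concatenating each chunk's delta content.
# Assumption: delta is a dict that may carry a 'content' key; the patch
# only shows printing chunk['choices'][0]['delta'] verbatim.
full_text = ""
for chunk in response:
    delta = chunk['choices'][0]['delta']
    full_text += delta.get('content') or ""

print(full_text)
```

The same loop applies to the `claude-2` call added in the patch; only the `model` argument changes.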