From 6e6ec8c65f5383c403c0de277770dbe95e47010a Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Sat, 5 Aug 2023 15:12:34 -0700
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 7f9364ccd..0677d6e78 100644
--- a/README.md
+++ b/README.md
@@ -51,9 +51,9 @@ Stable version
 pip install litellm==0.1.1
 ```
 
-## streaming queries
+## Streaming Queries
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
-```
+```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
 for chunk in response:
     print(chunk['choices'][0]['delta'])
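The streaming loop in the diff above can be exercised offline as a minimal sketch. `fake_stream` below is a hypothetical stand-in for `completion(model=..., messages=..., stream=True)` (it is not part of litellm and real calls require an API key); it only assumes chunks follow the `chunk['choices'][0]['delta']` shape shown in the patch.

```python
# Sketch of consuming a streaming response, assuming the chunk shape
# from the patch: chunk['choices'][0]['delta'].
# `fake_stream` is a hypothetical stand-in for completion(..., stream=True);
# it is NOT a litellm API and exists only to make the loop runnable offline.
def fake_stream():
    for token in ["Hello", ",", " world"]:
        yield {"choices": [{"delta": {"content": token}}]}

pieces = []
for chunk in fake_stream():
    delta = chunk["choices"][0]["delta"]
    # deltas may omit 'content' in real responses, so default to ""
    pieces.append(delta.get("content", ""))

full_text = "".join(pieces)
print(full_text)
```

Accumulating the per-chunk deltas like this is how callers typically rebuild the full completion text from a streaming iterator.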