diff --git a/README.md b/README.md
index 6048ce1cd..7f9364ccd 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,7 @@ Stable version
 pip install litellm==0.1.1
 ```
 
-## streaming Queries
+## streaming queries
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
 ```
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
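
For reference, a minimal sketch of consuming the streaming iterator described in the hunk above. It assumes `OPENAI_API_KEY` is set in the environment and that the streamed chunks follow the OpenAI-style delta format; neither detail is stated in this diff.

```python
# Sketch of the streaming usage shown in the README hunk above.
# Assumes OPENAI_API_KEY is exported and chunks use the OpenAI-style
# delta format (an assumption, not confirmed by this diff).
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# stream=True returns an iterator of partial-response chunks
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

for chunk in response:
    # each chunk carries an incremental "delta" with the newly generated text
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```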