From c03c4a48715366fd1de271119cba696cbe7cfba5 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Sat, 5 Aug 2023 15:42:04 -0700
Subject: [PATCH] fix docs

---
 docs/stream.md | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/docs/stream.md b/docs/stream.md
index dac0b08bc..5e8cc32ca 100644
--- a/docs/stream.md
+++ b/docs/stream.md
@@ -3,9 +3,8 @@
 - [Streaming Responses](#streaming-responses)
 - [Async Completion](#async-completion)
 
-LiteLLM supports streaming the model response back by passing `stream=True` as an argument to the completion function
-
 ## Streaming Responses
+LiteLLM supports streaming the model response back by passing `stream=True` as an argument to the completion function
 ### Usage
 ```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
@@ -13,10 +12,10 @@ for chunk in response:
 print(chunk['choices'][0]['delta'])
 ```
 
-Asynchronous Completion with LiteLLM
-LiteLLM provides an asynchronous version of the completion function called `acompletion`
 ## Async Completion
+Asynchronous Completion with LiteLLM
+LiteLLM provides an asynchronous version of the completion function called `acompletion`
 ### Usage
 ```
 from litellm import acompletion
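
For reference, the two usage patterns the patched docs describe can be sketched as a minimal self-contained example. `fake_stream` and `fake_acompletion` are hypothetical stand-ins for litellm's `completion(..., stream=True)` and `acompletion` (used here so the sketch runs without litellm installed); only the chunk shape `chunk['choices'][0]['delta']` is taken from the docs above.

```python
import asyncio

def fake_stream():
    # Stand-in for completion(..., stream=True): yields chunks shaped
    # like the streaming docs show, with the text under ['choices'][0]['delta'].
    for word in ["Hello", " world"]:
        yield {"choices": [{"delta": {"content": word}}]}

async def fake_acompletion():
    # Hypothetical stand-in for acompletion(): a coroutine that
    # resolves to a single, non-streamed response.
    await asyncio.sleep(0)
    return {"choices": [{"message": {"content": "Hello world"}}]}

# Streaming pattern: iterate the response and concatenate each chunk's delta.
streamed = "".join(
    chunk["choices"][0]["delta"]["content"] for chunk in fake_stream()
)

# Async pattern: await the coroutine (here via asyncio.run).
full = asyncio.run(fake_acompletion())["choices"][0]["message"]["content"]

print(streamed)
print(full)
```

Swapping the stubs for the real `completion`/`acompletion` calls gives the usage shown in the patched `docs/stream.md`.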