ishaan-jaff 2023-08-05 15:42:04 -07:00
parent c404b2b1b5
commit c03c4a4871


@@ -3,9 +3,8 @@
 - [Streaming Responses](#streaming-responses)
 - [Async Completion](#async-completion)
-LiteLLM supports streaming the model response back by passing `stream=True` as an argument to the completion function
 ## Streaming Responses
+LiteLLM supports streaming the model response back by passing `stream=True` as an argument to the completion function
 ### Usage
 ```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
@@ -13,10 +12,10 @@ for chunk in response:
 print(chunk['choices'][0]['delta'])
 ```
-Asynchronous Completion with LiteLLM
-LiteLLM provides an asynchronous version of the completion function called `acompletion`
 ## Async Completion
+Asynchronous Completion with LiteLLM
+LiteLLM provides an asynchronous version of the completion function called `acompletion`
 ### Usage
 ```
 from litellm import acompletion
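The async usage block in this hunk is cut off after the import, so the documented snippet's body isn't visible here. As a reference, here is a minimal runnable sketch of what an `acompletion` call looks like, assuming the same model name and message shape as the streaming example above; the prompt text is a placeholder, not from the commit:

```python
import asyncio
from litellm import acompletion

async def main():
    # Same message format used by completion() in the streaming example above;
    # the prompt content is a placeholder for illustration.
    messages = [{"role": "user", "content": "Hey, how's it going?"}]
    # acompletion mirrors completion()'s arguments but is awaitable
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    print(response)

asyncio.run(main())
```

Since `acompletion` takes the same arguments as `completion`, the streaming pattern above should carry over by passing `stream=True` and iterating the chunks asynchronously.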