(docs) update streaming

This commit is contained in:
ishaan-jaff 2023-11-21 11:20:53 -08:00
parent f29a353796
commit 3aebc46ebf


@@ -10,8 +10,8 @@ LiteLLM supports streaming the model response back by passing `stream=True` as a
 ```python
 from litellm import completion
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
-for chunk in response:
-    print(chunk['choices'][0]['delta'])
+for part in response:
+    print(part.choices[0].delta.content or "")
 ```
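The doc change above swaps dict-style access (`chunk['choices'][0]['delta']`) for attribute-style access (`part.choices[0].delta.content`), with `or ""` guarding chunks whose delta carries no content. A minimal sketch of why that guard matters, using `SimpleNamespace` objects as hypothetical stand-ins for real stream chunks (no litellm call is made here):

```python
from types import SimpleNamespace

def fake_stream():
    # Hypothetical chunks mimicking the attribute shape the updated docs use;
    # the final chunk often carries no content (None), hence the `or ""` guard.
    for text in ["Hello", ", ", "world", None]:
        yield SimpleNamespace(
            choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
        )

collected = []
for part in fake_stream():
    # Same access pattern as the updated docs: None becomes ""
    collected.append(part.choices[0].delta.content or "")

print("".join(collected))  # prints "Hello, world"
```

With a real `completion(..., stream=True)` response the loop body is identical; only the iterable differs.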