forked from phoenix/litellm-mirror
(docs) update streaming
This commit is contained in:
parent
f29a353796
commit
3aebc46ebf
1 changed file with 2 additions and 2 deletions
@@ -10,8 +10,8 @@ LiteLLM supports streaming the model response back by passing `stream=True` as a

```python
from litellm import completion

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

-for chunk in response:
-    print(chunk['choices'][0]['delta'])
+for part in response:
+    print(part.choices[0].delta.content or "")
```
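The updated loop can be exercised without an API key by substituting a stand-in stream. The mock below is a hypothetical sketch (`fake_stream` is not part of LiteLLM); it only mimics the OpenAI-style chunk shape `part.choices[0].delta.content` that the new docs line accesses:

```python
from types import SimpleNamespace

# Hypothetical stand-in for a LiteLLM streaming response: each yielded
# object mimics the chunk shape `part.choices[0].delta.content`.
def fake_stream():
    for text in ["Hello", ", ", "world", None]:  # final chunk carries no content
        delta = SimpleNamespace(content=text)
        yield SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

# Consume the stream exactly as the updated example does; `or ""` guards
# against the final chunk, whose delta content is None.
collected = "".join(part.choices[0].delta.content or "" for part in fake_stream())
print(collected)  # -> Hello, world
```

The `or ""` guard is the reason for the change: attribute access with a `None` final delta would otherwise print `None` into the assembled output.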