forked from phoenix/litellm-mirror
Update README.md
This commit is contained in:
parent
9182c569c0
commit
76a5f38ec9
1 changed file with 1 addition and 1 deletion
@@ -64,7 +64,7 @@ print(response)
 ## Streaming ([Docs](https://docs.litellm.ai/docs/completion/stream))
 liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
-Streaming is supported for OpenAI, Azure, Anthropic, Huggingface models
+Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
 ```python
 response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
 for chunk in response:
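The diff's code block cuts off inside the `for chunk in response:` loop. A minimal sketch of how such a streaming iterator is typically consumed is below; the stub generator stands in for a live `completion(..., stream=True)` call so the example runs offline, and the OpenAI-style `choices[0].delta.content` chunk shape is an assumption about the response format, not taken from this diff.

```python
# Sketch only: fake_stream() is a stand-in for a real streaming
# completion call; each chunk carries an incremental piece of the reply
# in an assumed OpenAI-style delta format.

def fake_stream():
    # Yield chunks the way a streaming iterator would.
    for piece in ["Hello", ", ", "world", "!"]:
        yield {"choices": [{"delta": {"content": piece}}]}

def collect(stream):
    # Accumulate the incremental deltas into the full reply text.
    parts = []
    for chunk in stream:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

print(collect(fake_stream()))  # Hello, world!
```

With a real call, `fake_stream()` would be replaced by the iterator returned when `stream=True` is passed.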