forked from phoenix/litellm-mirror
fix docs
This commit is contained in:
parent c404b2b1b5
commit c03c4a4871
1 changed file with 3 additions and 4 deletions
@@ -3,9 +3,8 @@
- [Streaming Responses](#streaming-responses)
- [Async Completion](#async-completion)

## Streaming Responses

LiteLLM supports streaming the model response back by passing `stream=True` as an argument to the completion function.

### Usage

```python
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
@@ -13,10 +12,10 @@ for chunk in response:
print(chunk['choices'][0]['delta'])
```
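The delta chunks printed above can also be accumulated into the full response text. A minimal sketch, assuming OpenAI-style chunk dicts whose `delta` may carry a `content` key (the mock `chunks` list below stands in for what `completion(..., stream=True)` would yield):

```python
# Mock chunks standing in for the streamed response; the 'content' key
# inside 'delta' is assumed from the OpenAI streaming format.
chunks = [
    {'choices': [{'delta': {'content': 'Hello'}}]},
    {'choices': [{'delta': {'content': ', world'}}]},
    {'choices': [{'delta': {}}]},  # a final chunk may carry an empty delta
]

full_text = ''
for chunk in chunks:
    delta = chunk['choices'][0]['delta']
    # .get() tolerates chunks whose delta has no 'content' key
    full_text += delta.get('content', '')

print(full_text)  # Hello, world
```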

## Async Completion

LiteLLM provides an asynchronous version of the completion function, called `acompletion`.

### Usage

```python
from litellm import acompletion
```
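Since `acompletion` is a coroutine, it must be awaited from an event loop. A minimal sketch of the calling pattern, where `fake_acompletion` is a hypothetical stand-in for the real call (which requires API credentials):

```python
import asyncio

# Hypothetical stand-in mirroring the shape of `acompletion`; the real
# call would be `await acompletion(model=..., messages=...)`.
async def fake_acompletion(model, messages):
    await asyncio.sleep(0)  # yield control, as a real network call would
    return {'choices': [{'message': {'content': 'hi there'}}]}

async def main():
    response = await fake_acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hey"}],
    )
    return response['choices'][0]['message']['content']

print(asyncio.run(main()))  # hi there
```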