(docs) proxy add info on testing caching
parent 907bff33f0
commit e7eb495a3a
1 changed file with 23 additions and 3 deletions
@@ -172,9 +172,7 @@ On a successful deploy https://dashboard.render.com/ should display the following
## Advanced
### Caching - Completion() and Embedding() Responses
#### Caching on Redis
To enable Redis caching, add the following credentials to your server environment; litellm will begin caching your responses:

```
REDIS_HOST = "" # REDIS_HOST='redis-18841.c274.us-east-1-3.ec2.cloud.redislabs.com'
@@ -182,6 +180,28 @@ In order to enable Redis caching:
REDIS_PASSWORD = "" # REDIS_PASSWORD='liteLlmIsAmazing'
```
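
As a rough sketch of how these settings might be collected, the hypothetical helper below reads the variables shown above and decides whether caching is configured. The `REDIS_PORT` variable and the `redis_config_from_env` helper are illustrative assumptions, not litellm's actual internals:

```python
import os

def redis_config_from_env(env=None):
    """Collect Redis cache settings from the environment.

    Returns a dict of connection settings when REDIS_HOST is set,
    or None when caching should stay disabled.
    (Hypothetical helper for illustration; litellm reads these
    variables internally.)
    """
    env = os.environ if env is None else env
    host = env.get("REDIS_HOST")
    if not host:
        return None
    return {
        "host": host,
        "port": int(env.get("REDIS_PORT", 6379)),  # assumed variable; 6379 is the Redis default
        "password": env.get("REDIS_PASSWORD"),
    }
```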

#### Test Caching

Send the same request twice:
```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "write a poem about litellm!"}],
     "temperature": 0.7
   }'

curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "write a poem about litellm!"}],
     "temperature": 0.7
   }'
```
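
To run the same check programmatically, here is a minimal sketch using only the standard library. It assumes the proxy address from the curl example above, and that a cache hit returns the same completion noticeably faster; `timed_completion` and `looks_cached` are hypothetical helpers, not part of litellm:

```python
import json
import time
import urllib.request

URL = "http://0.0.0.0:8000/v1/chat/completions"  # proxy address from the curl example
PAYLOAD = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "write a poem about litellm!"}],
    "temperature": 0.7,
}

def timed_completion(url=URL, payload=PAYLOAD):
    """Send one chat completion request; return (elapsed_seconds, response_dict)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return time.monotonic() - start, body

def looks_cached(first, second, speedup=2.0):
    """Heuristic cache check: identical content and a clearly faster second call.

    first/second are (elapsed_seconds, response_dict) pairs. This is an
    illustrative heuristic, not litellm's own cache detection.
    """
    (t1, r1), (t2, r2) = first, second
    same_content = (
        r1["choices"][0]["message"]["content"]
        == r2["choices"][0]["message"]["content"]
    )
    return same_content and t2 * speedup < t1
```

With a running proxy, calling `timed_completion()` twice and passing both results to `looks_cached` gives a quick sanity check that the second response was served from cache.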
#### Control caching per completion request