Update README.md

## Caching ([Docs](https://docs.litellm.ai/docs/caching/))
LiteLLM supports caching `completion()` and `embedding()` calls for all LLMs. [Hosted Cache LiteLLM API](https://docs.litellm.ai/docs/caching/caching_api)
```python
import os
import litellm
from litellm.caching import Cache

litellm.cache = Cache()
os.environ['OPENAI_API_KEY'] = ""

# add to cache
response1 = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}],
    caching=True
)
# returns cached response
response2 = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}],
    caching=True
)
print(f"response1: {response1}")
print(f"response2: {response2}")
```
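The same cache also covers `embedding()` calls. A minimal sketch, assuming `embedding()` accepts the same `caching=True` flag (the model name and input text here are illustrative):

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache()

# first call computes the embedding and stores it;
# the repeat call with identical arguments is served from the cache
embedding1 = litellm.embedding(
    model="text-embedding-ada-002",  # illustrative model choice
    input=["why is LiteLLM amazing?"],
    caching=True
)
embedding2 = litellm.embedding(
    model="text-embedding-ada-002",
    input=["why is LiteLLM amazing?"],
    caching=True
)
```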
## OpenAI Proxy Server ([Docs](https://docs.litellm.ai/docs/proxy_server))