docs: make it easier to find anthropic/openai prompt caching doc
parent 15b44c3221
commit 806a1c4acc
2 changed files with 10 additions and 1 deletion
@@ -7,7 +7,10 @@ import TabItem from '@theme/TabItem';
 
 :::info
 
-Need to use Caching on LiteLLM Proxy Server? Doc here: [Caching Proxy Server](https://docs.litellm.ai/docs/proxy/caching)
+- For Proxy Server? Doc here: [Caching Proxy Server](https://docs.litellm.ai/docs/proxy/caching)
+
+- For OpenAI/Anthropic Prompt Caching, go [here](../completion/prompt_caching.md)
+
 
 :::
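For readers following the new prompt-caching link, a minimal sketch of the kind of call that page documents: marking a large block of context as cacheable by passing Anthropic's `cache_control` field through `litellm.completion`. The model name and prompt text here are illustrative assumptions, not part of this commit.

```python
# Sketch: Anthropic prompt caching via litellm's pass-through of the
# `cache_control` content field. Model and prompt are placeholders.
from litellm import completion

long_context = "Full text of a long reference document..." * 100  # placeholder payload

response = completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": long_context,
                    # marks this block as cacheable on Anthropic's side
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Summarize the key points of the document."},
    ],
)
print(response.usage)
```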
@@ -4,6 +4,12 @@ import TabItem from '@theme/TabItem';
 # Caching
 Cache LLM Responses
 
+:::note
+
+For OpenAI/Anthropic Prompt Caching, go [here](../completion/prompt_caching.md)
+
+:::
+
 LiteLLM supports:
 - In Memory Cache
 - Redis Cache
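Since the touched page documents litellm's in-memory and Redis response caches, a minimal sketch of enabling them, assuming the `Cache` class from `litellm.caching`; the Redis connection details are placeholders.

```python
# Sketch: enabling litellm's response cache. In-memory is the default
# when no type is given; the Redis settings below are placeholders.
import os

import litellm
from litellm.caching import Cache

# In-memory cache (per-process)
litellm.cache = Cache()

# Or a shared Redis cache (placeholder connection details)
# litellm.cache = Cache(
#     type="redis",
#     host=os.environ.get("REDIS_HOST", "localhost"),
#     port=os.environ.get("REDIS_PORT", "6379"),
#     password=os.environ.get("REDIS_PASSWORD"),
# )

# identical calls after the first are served from the cache
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world"}],
    caching=True,
)
```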