doc fix auto prompt caching

Ishaan Jaff 2025-04-15 22:29:26 -07:00
parent 9e90676058
commit fc31209d16
2 changed files with 2 additions and 4 deletions


@@ -2,7 +2,6 @@ import Image from '@theme/IdealImage';
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 # Auto-Inject Prompt Caching Checkpoints
 Reduce costs by up to 90% by using LiteLLM to auto-inject prompt caching checkpoints.
@@ -14,9 +13,8 @@ Reduce costs by up to 90% by using LiteLLM to auto-inject prompt caching checkpoints.
 LiteLLM can automatically inject prompt caching checkpoints into your requests to LLM providers. This allows:
-- **Cached Processing**: Long, static parts of your prompts can be cached to avoid repeated processing
-- **Cost Reduction**: Only process the dynamic parts of your prompts, significantly reducing API costs
-- **Seamless Integration**: No need to modify your application code
+- **Cost Reduction**: Long, static parts of your prompts can be cached to avoid repeated processing
+- **No need to modify your application code**: You can configure the auto-caching behavior in the LiteLLM UI or in the `litellm config.yaml` file.
 ## Configuration
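
The new bullet above points at the `litellm config.yaml` file. As a rough illustration of what such an auto-caching entry might look like, here is a minimal sketch; the model alias is made up, and the key names (`cache_control_injection_points`, `location`, `role`) are assumptions based on LiteLLM's prompt-caching feature rather than anything shown in this commit:

```yaml
# Hypothetical litellm config.yaml entry -- a sketch, not confirmed by this commit.
model_list:
  - model_name: claude-with-auto-caching        # made-up alias
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
      # Assumed knob: auto-inject a caching checkpoint on the static system
      # message so only the dynamic user turns are re-processed per request.
      cache_control_injection_points:
        - location: message
          role: system
```

With a setup like this, callers keep sending ordinary requests to the proxy; the checkpoint is injected server-side, which is what the "no need to modify your application code" bullet refers to.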

Binary file not shown (docs image updated): before 85 KiB, after 103 KiB.
