forked from phoenix/litellm-mirror
Latest commit:

* docs(prompt_caching.md): add prompt caching cost calc example to docs
* docs(prompt_caching.md): add proxy examples to docs
* feat(utils.py): expose new helper `supports_prompt_caching()` to check if a model supports prompt caching
* docs(prompt_caching.md): add docs on checking model support for prompt caching
* build: fix invalid json
Files:

- batching.md
- drop_params.md
- function_call.md
- input.md
- json_mode.md
- message_trimming.md
- mock_requests.md
- model_alias.md
- multiple_deployments.md
- output.md
- prefix.md
- prompt_caching.md
- prompt_formatting.md
- provider_specific_params.md
- reliable_completions.md
- stream.md
- token_usage.md
- usage.md
- vision.md