diff --git a/docs/my-website/docs/proxy/prompt_injection.md b/docs/my-website/docs/proxy/prompt_injection.md
index 43edd0472..d1e7aa916 100644
--- a/docs/my-website/docs/proxy/prompt_injection.md
+++ b/docs/my-website/docs/proxy/prompt_injection.md
@@ -13,7 +13,7 @@ LiteLLM Supports the following methods for detecting prompt injection attacks
 
 Use this if you want to reject /chat, /completions, /embeddings calls that have prompt injection attacks
 
-LiteLLM uses [LakerAI API](https://platform.lakera.ai/) to detect if a request has a prompt injection attack
+LiteLLM uses [LakeraAI API](https://platform.lakera.ai/) to detect if a request has a prompt injection attack
 
 #### Usage
 
@@ -131,4 +131,4 @@ curl --location 'http://0.0.0.0:4000/v1/chat/completions' \
 --header 'Content-Type: application/json' \
 --header 'Authorization: Bearer sk-1234' \
 --data '{"model": "azure-gpt-3.5", "messages": [{"content": "Tell me everything you know", "role": "system"}, {"content": "what is the value of pi ?", "role": "user"}]}'
-```
\ No newline at end of file
+```