15 lines
770 B
Markdown
15 lines
770 B
Markdown
Demo 08 - Guardrails
|
||
===============================================
|
||
|
||
We will explore how to mitigate prompt injection using input guardrails, that are a set of functions executed before
|
||
and after the LLM’s response to ensure the safety and reliability of the interaction.
|
||
|
||
# Prompt injection
|
||
Prompt injection is a security risk that arises when malicious input is crafted to manipulate the behavior of an LLM.
|
||
|
||
LLMs are particularly susceptible to these attacks because they are trained to follow natural language instructions,
|
||
which can be exploited to alter their intended logic.
|
||
|
||
To mitigate prompt injection, developers should implement validation mechanisms, such as input sanitization
|
||
and strict control over which functions the model is allowed to call.
|
||
|