chore: Add README.md files
This commit is contained in:
commit
4ceec902a3
155 changed files with 19124 additions and 0 deletions
15
demo-08/README.md
Normal file
15
demo-08/README.md
Normal file
|
@ -0,0 +1,15 @@
|
|||
Demo 08 - Guardrails
|
||||
===============================================
|
||||
|
||||
We will explore how to mitigate prompt injection using input guardrails, that are a set of functions executed before
|
||||
and after the LLM’s response to ensure the safety and reliability of the interaction.
|
||||
|
||||
# Prompt injection
|
||||
Prompt injection is a security risk that arises when malicious input is crafted to manipulate the behavior of an LLM.
|
||||
|
||||
LLMs are particularly susceptible to these attacks because they are trained to follow natural language instructions,
|
||||
which can be exploited to alter their intended logic.
|
||||
|
||||
To mitigate prompt injection, developers should implement validation mechanisms, such as input sanitization
|
||||
and strict control over which functions the model is allowed to call.
|
||||
|
Loading…
Add table
Add a link
Reference in a new issue