chore: Add README.md files

Roque Caballero 2025-03-28 17:36:22 +01:00
commit 4ceec902a3
Signed by: roque.caballero
SSH key fingerprint: SHA256:+oco2mi9KAXp5fmBGQyUMk3bBo0scA4b8sL7Gf2pEwo
155 changed files with 19124 additions and 0 deletions

demo-08/README.md Normal file

@@ -0,0 +1,15 @@
Demo 08 - Guardrails
===============================================
We will explore how to mitigate prompt injection using input guardrails, which are a set of functions executed before
and after the LLM call to ensure the safety and reliability of the interaction.
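
As an illustration, the sketch below models a guardrail as a plain Java function that inspects the user message before it is forwarded to the model. The `InputGuardrail` interface and `GuardrailResult` type are hypothetical names used only for this example, not the API of any specific framework.

```java
// Hypothetical input-guardrail contract: runs before the user message
// reaches the LLM and either lets it pass or blocks the call.
@FunctionalInterface
interface InputGuardrail {
    GuardrailResult validate(String userMessage);
}

// Simple result type: either the message is allowed, or it is rejected
// with a reason that can be reported back to the caller.
record GuardrailResult(boolean allowed, String reason) {
    static GuardrailResult ok() {
        return new GuardrailResult(true, "");
    }
    static GuardrailResult block(String reason) {
        return new GuardrailResult(false, reason);
    }
}
```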
## Prompt injection
Prompt injection is a security risk that arises when malicious input is crafted to manipulate the behavior of an LLM.
LLMs are particularly susceptible to these attacks because they are trained to follow natural language instructions,
which can be exploited to alter their intended logic.
To mitigate prompt injection, developers should implement validation mechanisms, such as input sanitization
and strict control over which functions the model is allowed to call.
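
For instance, a minimal input-sanitization guardrail (building on the hypothetical `InputGuardrail` interface sketched above) could reject messages containing common injection phrases before they ever reach the model:

```java
import java.util.List;
import java.util.Locale;

// Minimal sketch of an input-sanitization guardrail: it scans the user
// message for common injection phrases and blocks the request if one is found.
class PromptInjectionGuardrail implements InputGuardrail {

    // Very small, illustrative blocklist; a real deployment would rely on
    // more robust detection (classifiers, LLM-based judges, etc.).
    private static final List<String> SUSPICIOUS_PHRASES = List.of(
            "ignore previous instructions",
            "ignore the above",
            "you are now",
            "reveal your system prompt");

    @Override
    public GuardrailResult validate(String userMessage) {
        String normalized = userMessage.toLowerCase(Locale.ROOT);
        for (String phrase : SUSPICIOUS_PHRASES) {
            if (normalized.contains(phrase)) {
                return GuardrailResult.block("Possible prompt injection: \"" + phrase + "\"");
            }
        }
        return GuardrailResult.ok();
    }
}
```

In this sketch the guardrail is invoked before the chat request is sent, so only messages that pass validation are forwarded to the LLM.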