standardized port and also included pre-req for all notebooks

This commit is contained in:
Justin Lee 2024-11-05 16:38:46 -08:00
parent d0baf24999
commit b556cd91fd
8 changed files with 177 additions and 42 deletions


@@ -11,7 +11,9 @@
"As outlined in our [Responsible Use Guide](https://www.llama.com/docs/how-to-guides/responsible-use-guide-resources/), LLM apps should deploy appropriate system level safeguards to mitigate safety and security risks of LLM system, similar to the following diagram:\n",
"![Figure 1: Safety System](../_static/safety_system.webp)\n",
"\n",
"To that goal, Llama Stack uses **Prompt Guard** and **Llama Guard 3** to secure our system. Here are the quick introduction about them."
"To that goal, Llama Stack uses **Prompt Guard** and **Llama Guard 3** to secure our system. Here are the quick introduction about them.\n",
"\n",
"Before you begin, please ensure Llama Stack is installed and set up by following the [Getting Started Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html)."
]
},
{
@@ -84,6 +86,23 @@
"After the server started, you can test safety example using the follow code:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set up your connection parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"HOST = \"localhost\" # Replace with your host\n",
"PORT = 5001 # Replace with your port"
]
},
{
"cell_type": "code",
"execution_count": 1,
@@ -163,7 +182,7 @@
"\n",
"\n",
"async def safety_example():\n",
" client = SafetyClient(f\"http://localhost:5000\")\n",
" client = SafetyClient(f\"http://{HOST}:{PORT}\")\n",
"\n",
" for message in [\n",
" UserMessage(content=\"hello world, write me a 2 sentence poem about the moon\"),\n",