From e4560a5e74eadd0c940a17f6a2014ab3663cbdcc Mon Sep 17 00:00:00 2001
From: Kai Wu
Date: Thu, 31 Oct 2024 13:37:55 -0700
Subject: [PATCH] second draft

---
 docs/Prompt_Engineering_with_Llama_3.ipynb | 795 +++++++++++++++++++++
 docs/safety101.md                          |  52 ++
 docs/safety_system.webp                    | Bin 0 -> 32068 bytes
 3 files changed, 847 insertions(+)
 create mode 100644 docs/Prompt_Engineering_with_Llama_3.ipynb
 create mode 100644 docs/safety101.md
 create mode 100644 docs/safety_system.webp

diff --git a/docs/Prompt_Engineering_with_Llama_3.ipynb b/docs/Prompt_Engineering_with_Llama_3.ipynb
new file mode 100644
index 000000000..f9e705666
--- /dev/null
+++ b/docs/Prompt_Engineering_with_Llama_3.ipynb
@@ -0,0 +1,795 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\"Open\n",
+    "\n",
+    "# Prompt Engineering with Llama 3.1\n",
+    "\n",
+    "Prompt engineering is the practice of using natural language to elicit a desired response from a large language model (LLM).\n",
+    "\n",
+    "This interactive guide covers prompt engineering & best practices with Llama 3.1."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Introduction"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Why now?\n",
+    "\n",
+    "[Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762) introduced the world to transformer neural networks (originally for machine translation). Transformers ushered in an era of generative AI with diffusion models for image creation and large language models (`LLMs`) as **programmable deep learning networks**.\n",
+    "\n",
+    "Programming foundational LLMs is done with natural language – it doesn't require training/tuning like ML models of the past. This has opened the door to a massive amount of innovation and a paradigm shift in how technology can be deployed. The science/art of using natural language to program language models to accomplish a task is referred to as **Prompt Engineering**."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Llama Models\n",
+    "\n",
+    "In 2023, Meta introduced the [Llama language models](https://ai.meta.com/llama/) (Llama Chat, Code Llama, Llama Guard). These are general-purpose, state-of-the-art LLMs.\n",
+    "\n",
+    "Llama models come in varying parameter sizes. The smaller models are cheaper to deploy and run; the larger models are more capable.\n",
+    "\n",
+    "#### Llama 3.1\n",
+    "1. `llama-3.1-8b` - base pretrained 8 billion parameter model\n",
+    "1. `llama-3.1-70b` - base pretrained 70 billion parameter model\n",
+    "1. `llama-3.1-405b` - base pretrained 405 billion parameter model\n",
+    "1. `llama-3.1-8b-instruct` - instruction fine-tuned 8 billion parameter model\n",
+    "1. `llama-3.1-70b-instruct` - instruction fine-tuned 70 billion parameter model\n",
+    "1. `llama-3.1-405b-instruct` - instruction fine-tuned 405 billion parameter model (flagship)\n",
+    "\n",
+    "\n",
+    "#### Llama 3\n",
+    "1. `llama-3-8b` - base pretrained 8 billion parameter model\n",
+    "1. `llama-3-70b` - base pretrained 70 billion parameter model\n",
+    "1. `llama-3-8b-instruct` - instruction fine-tuned 8 billion parameter model\n",
+    "1. `llama-3-70b-instruct` - instruction fine-tuned 70 billion parameter model (flagship)\n",
+    "\n",
+    "#### Llama 2\n",
+    "1. `llama-2-7b` - base pretrained 7 billion parameter model\n",
+    "1. `llama-2-13b` - base pretrained 13 billion parameter model\n",
+    "1. `llama-2-70b` - base pretrained 70 billion parameter model\n",
+    "1. `llama-2-7b-chat` - chat fine-tuned 7 billion parameter model\n",
+    "1. `llama-2-13b-chat` - chat fine-tuned 13 billion parameter model\n",
+    "1. `llama-2-70b-chat` - chat fine-tuned 70 billion parameter model (flagship)\n"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Code Llama is a code-focused LLM built on top of Llama 2, also available in various sizes and fine-tunes:"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Code Llama\n",
+    "1. `codellama-7b` - code fine-tuned 7 billion parameter model\n",
+    "1. `codellama-13b` - code fine-tuned 13 billion parameter model\n",
+    "1. `codellama-34b` - code fine-tuned 34 billion parameter model\n",
+    "1. `codellama-70b` - code fine-tuned 70 billion parameter model\n",
+    "1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model\n",
+    "1. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model\n",
+    "1. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model\n",
+    "1. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model\n",
+    "1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model\n",
+    "1. `codellama-13b-python` - Python fine-tuned 13 billion parameter model\n",
+    "1. `codellama-34b-python` - Python fine-tuned 34 billion parameter model\n",
+    "1. `codellama-70b-python` - Python fine-tuned 70 billion parameter model"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Getting an LLM\n",
+    "\n",
+    "Large language models are deployed and accessed in a variety of ways, including:\n",
+    "\n",
+    "1. **Self-hosting**: Using local hardware to run inference. Ex. running Llama on your MacBook Pro using [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
+    "    * Best for privacy/security or if you already have a GPU.\n",
+    "1. **Cloud hosting**: Using a cloud provider to deploy an instance that hosts a specific model. Ex. running Llama on cloud providers like AWS, Azure, GCP, and others.\n",
+    "    * Best for customizing models and their runtime (ex. fine-tuning a model for your use case).\n",
+    "1. **Hosted API**: Call LLMs directly via an API. There are many companies that provide Llama inference APIs, including AWS Bedrock, Replicate, Anyscale, Together, and others.\n",
+    "    * Easiest option overall."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Hosted APIs\n",
+    "\n",
+    "Hosted APIs are the easiest way to get started. We'll use them here. There are usually two main endpoints:\n",
+    "\n",
+    "1. **`completion`**: generate a response to a given prompt (a string).\n",
+    "1. **`chat_completion`**: generate the next message in a list of messages, enabling more explicit instruction and context for use cases like chatbots."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Tokens\n",
+    "\n",
+    "LLMs process inputs and outputs in chunks called *tokens*. Think of these, roughly, as words – each model will have its own tokenization scheme. For example, this sentence...\n",
+    "\n",
+    "> Our destiny is written in the stars.\n",
+    "\n",
+    "...is tokenized into `[\"Our\", \" destiny\", \" is\", \" written\", \" in\", \" the\", \" stars\", \".\"]` for Llama 3. See [this](https://tiktokenizer.vercel.app/?model=meta-llama%2FMeta-Llama-3-8B) for an interactive tokenizer tool.\n",
+    "\n",
+    "Tokens matter most when you consider API pricing and internal behavior (ex. hyperparameters).\n",
+    "\n",
+    "Each model has a maximum context length that your prompt cannot exceed. That's 128K tokens for Llama 3.1, 4K for Llama 2, and 100K for Code Llama.\n"
+   ]
+  },
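+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you want to inspect tokenization yourself, here is a minimal sketch using the Hugging Face `transformers` tokenizer. It is not part of this guide's setup: it assumes `transformers` is installed and that you have access to the gated `meta-llama/Meta-Llama-3-8B` repository."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional sketch: inspect Llama 3 tokenization locally\n",
+    "# (assumes `transformers` is installed and the gated model repo is accessible)\n",
+    "from transformers import AutoTokenizer\n",
+    "\n",
+    "tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Meta-Llama-3-8B\")\n",
+    "ids = tokenizer.encode(\"Our destiny is written in the stars.\")\n",
+    "print(len(ids))  # the token count is what API pricing is based on\n",
+    "print(tokenizer.convert_ids_to_tokens(ids))"
+   ]
+  },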
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Notebook Setup\n",
+    "\n",
+    "The following APIs will be used to call LLMs throughout the guide. As an example, we'll call Llama 3.1 chat using [Groq](https://console.groq.com/playground?model=llama3-70b-8192).\n",
+    "\n",
+    "To install prerequisites, run:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import sys\n",
+    "!{sys.executable} -m pip install groq"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from typing import Dict, List\n",
+    "from groq import Groq\n",
+    "\n",
+    "# Get a free API key from https://console.groq.com/keys\n",
+    "os.environ[\"GROQ_API_KEY\"] = \"YOUR_GROQ_API_KEY\"\n",
+    "\n",
+    "LLAMA3_405B_INSTRUCT = \"llama-3.1-405b-reasoning\" # Note: Groq currently only gives access to the 405B model to paying customers\n",
+    "LLAMA3_70B_INSTRUCT = \"llama-3.1-70b-versatile\"\n",
+    "LLAMA3_8B_INSTRUCT = \"llama-3.1-8b-instant\"\n",
+    "\n",
+    "DEFAULT_MODEL = LLAMA3_70B_INSTRUCT\n",
+    "\n",
+    "client = Groq()\n",
+    "\n",
+    "def assistant(content: str):\n",
+    "    return { \"role\": \"assistant\", \"content\": content }\n",
+    "\n",
+    "def user(content: str):\n",
+    "    return { \"role\": \"user\", \"content\": content }\n",
+    "\n",
+    "def chat_completion(\n",
+    "    messages: List[Dict],\n",
+    "    model: str = DEFAULT_MODEL,\n",
+    "    temperature: float = 0.6,\n",
+    "    top_p: float = 0.9,\n",
+    ") -> str:\n",
+    "    response = client.chat.completions.create(\n",
+    "        messages=messages,\n",
+    "        model=model,\n",
+    "        temperature=temperature,\n",
+    "        top_p=top_p,\n",
+    "    )\n",
+    "    return response.choices[0].message.content\n",
+    "\n",
+    "def completion(\n",
+    "    prompt: str,\n",
+    "    model: str = DEFAULT_MODEL,\n",
+    "    temperature: float = 0.6,\n",
+    "    top_p: float = 0.9,\n",
+    ") -> str:\n",
+    "    return chat_completion(\n",
+    "        [user(prompt)],\n",
+    "        model=model,\n",
+    "        temperature=temperature,\n",
+    "        top_p=top_p,\n",
+    "    )\n",
+    "\n",
+    "def complete_and_print(prompt: str, model: str = DEFAULT_MODEL):\n",
+    "    print(f'==============\\n{prompt}\\n==============')\n",
+    "    response = completion(prompt, model)\n",
+    "    print(response, end='\\n\\n')\n"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Completion APIs\n",
+    "\n",
+    "Let's try Llama 3.1!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"The typical color of the sky is: \")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"Which model version are you?\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Chat Completion APIs\n",
+    "Chat completion models provide additional structure for interacting with an LLM. An array of structured message objects is sent to the LLM instead of a single piece of text. This message list provides the LLM with some \"context\" or \"history\" from which to continue.\n",
+    "\n",
+    "Typically, each message contains `role` and `content`:\n",
+    "* Messages with the `system` role are used by developers to provide core instructions to the LLM.\n",
+    "* Messages with the `user` role are typically human-provided messages.\n",
+    "* Messages with the `assistant` role are typically generated by the LLM."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response = chat_completion(messages=[\n",
+    "    user(\"My favorite color is blue.\"),\n",
+    "    assistant(\"That's great to hear!\"),\n",
+    "    user(\"What is my favorite color?\"),\n",
+    "])\n",
+    "print(response)\n",
+    "# \"Sure, I can help you with that! Your favorite color is blue.\""
+   ]
+  },
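+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The setup above only defines `user()` and `assistant()` helpers. As a minimal sketch (not part of the original setup code), a hypothetical `system()` helper with the same shape lets you pin core instructions to the conversation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Hypothetical helper, mirroring user()/assistant() above\n",
+    "def system(content: str):\n",
+    "    return { \"role\": \"system\", \"content\": content }\n",
+    "\n",
+    "response = chat_completion(messages=[\n",
+    "    system(\"You are a terse assistant. Answer in one short sentence.\"),\n",
+    "    user(\"Why is the sky blue?\"),\n",
+    "])\n",
+    "print(response)\n",
+    "# Expected: a one-sentence answer mentioning light scattering"
+   ]
+  },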
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### LLM Hyperparameters\n",
+    "\n",
+    "#### `temperature` & `top_p`\n",
+    "\n",
+    "These APIs also take parameters which influence the creativity and determinism of your output.\n",
+    "\n",
+    "At each step, LLMs generate a list of most likely tokens and their respective probabilities. The least likely tokens are \"cut\" from the list (based on `top_p`), and then a token is randomly sampled from the remaining candidates (with randomness scaled by `temperature`).\n",
+    "\n",
+    "In other words: `top_p` controls the breadth of vocabulary in a generation and `temperature` controls the randomness within that vocabulary. A temperature of ~0 produces *almost* deterministic results.\n",
+    "\n",
+    "[Read more about temperature setting here](https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683).\n",
+    "\n",
+    "Let's try it out:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def print_tuned_completion(temperature: float, top_p: float):\n",
+    "    response = completion(\"Write a haiku about llamas\", temperature=temperature, top_p=top_p)\n",
+    "    print(f'[temperature: {temperature} | top_p: {top_p}]\\n{response.strip()}\\n')\n",
+    "\n",
+    "print_tuned_completion(0.01, 0.01)\n",
+    "print_tuned_completion(0.01, 0.01)\n",
+    "# These two generations are highly likely to be the same\n",
+    "\n",
+    "print_tuned_completion(1.0, 1.0)\n",
+    "print_tuned_completion(1.0, 1.0)\n",
+    "# These two generations are highly likely to be different"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Prompting Techniques"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Explicit Instructions\n",
+    "\n",
+    "Detailed, explicit instructions produce better results than open-ended prompts:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(prompt=\"Describe quantum physics in one short sentence of no more than 12 words\")\n",
+    "# Returns a succinct explanation of quantum physics that mentions particles and states existing simultaneously."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can think of giving explicit instructions as applying rules and restrictions to how Llama 3 responds to your prompt.\n",
+    "\n",
+    "- Stylization\n",
+    "    - `Explain this to me like a topic on a children's educational network show teaching elementary students.`\n",
+    "    - `I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:`\n",
+    "    - `Give your answer like an old timey private investigator hunting down a case step by step.`\n",
+    "- Formatting\n",
+    "    - `Use bullet points.`\n",
+    "    - `Return as a JSON object.`\n",
+    "    - `Use less technical terms and help me apply it in my work in communications.`\n",
+    "- Restrictions\n",
+    "    - `Only use academic papers.`\n",
+    "    - `Never give sources older than 2020.`\n",
+    "    - `If you don't know the answer, say that you don't know.`\n",
+    "\n",
+    "Here's an example of using explicit instructions to get more specific results by limiting responses to recently created sources:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"Explain the latest advances in large language models to me.\")\n",
+    "# More likely to cite sources from 2017\n",
+    "\n",
+    "complete_and_print(\"Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.\")\n",
+    "# Gives more specific advances and only cites sources from 2020"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Example Prompting using Zero- and Few-Shot Learning\n",
+    "\n",
+    "A shot is an example or demonstration of what type of prompt and response you expect from a large language model. This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image ([Fei-Fei et al. (2006)](http://vision.stanford.edu/documents/Fei-FeiFergusPerona2006.pdf)).\n",
+    "\n",
+    "#### Zero-Shot Prompting\n",
+    "\n",
+    "Large language models like Llama 3 are unique because they are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called \"zero-shot prompting\".\n",
+    "\n",
+    "Let's try using Llama 3 as a sentiment detector. You may notice that the output format varies - we can improve this with better prompting."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"Text: This was the best movie I've ever seen! \\n The sentiment of the text is: \")\n",
+    "# Returns positive sentiment\n",
+    "\n",
+    "complete_and_print(\"Text: The director was trying too hard. \\n The sentiment of the text is: \")\n",
+    "# Returns negative sentiment"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "#### Few-Shot Prompting\n",
+    "\n",
+    "Adding specific examples of your desired output generally results in more accurate, consistent output. This technique is called \"few-shot prompting\".\n",
+    "\n",
+    "In this example, the generated response follows our desired format: a more nuanced sentiment classifier that gives positive, neutral, and negative response confidence percentages.\n",
+    "\n",
+    "See also: [Zhao et al. (2021)](https://arxiv.org/abs/2102.09690), [Liu et al. (2021)](https://arxiv.org/abs/2101.06804), [Su et al. (2022)](https://arxiv.org/abs/2209.01975), [Rubin et al. (2022)](https://arxiv.org/abs/2112.08633).\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def sentiment(text):\n",
+    "    response = chat_completion(messages=[\n",
+    "        user(\"You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative.\"),\n",
+    "        user(\"I liked it\"),\n",
+    "        assistant(\"70% positive 30% neutral 0% negative\"),\n",
+    "        user(\"It could be better\"),\n",
+    "        assistant(\"0% positive 50% neutral 50% negative\"),\n",
+    "        user(\"It's fine\"),\n",
+    "        assistant(\"25% positive 50% neutral 25% negative\"),\n",
+    "        user(text),\n",
+    "    ])\n",
+    "    return response\n",
+    "\n",
+    "def print_sentiment(text):\n",
+    "    print(f'INPUT: {text}')\n",
+    "    print(sentiment(text))\n",
+    "\n",
+    "print_sentiment(\"I thought it was okay\")\n",
+    "# More likely to return a balanced mix of positive, neutral, and negative\n",
+    "print_sentiment(\"I loved it!\")\n",
+    "# More likely to return 100% positive\n",
+    "print_sentiment(\"Terrible service 0/10\")\n",
+    "# More likely to return 100% negative"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Role Prompting\n",
+    "\n",
+    "Llama will often give more consistent responses when given a role ([Kong et al. (2023)](https://browse.arxiv.org/pdf/2308.07702.pdf)). Roles give the LLM context on what type of answers are desired.\n",
+    "\n",
+    "Let's use Llama 3 to create a more focused, technical response for a question around the pros and cons of using PyTorch."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"Explain the pros and cons of using PyTorch.\")\n",
+    "# More likely to explain the pros and cons of PyTorch at a general level, covering areas like documentation, the PyTorch community, and a steep learning curve\n",
+    "\n",
+    "complete_and_print(\"Your role is a machine learning expert who gives highly technical advice to senior engineers who work with complicated datasets. Explain the pros and cons of using PyTorch.\")\n",
+    "# Often results in more technical benefits and drawbacks, with more detail on topics like how model layers are handled"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Chain-of-Thought\n",
+    "\n",
+    "Simply adding a phrase encouraging step-by-step thinking \"significantly improves the ability of large language models to perform complex reasoning\" ([Wei et al. (2022)](https://arxiv.org/abs/2201.11903)). This technique is called \"CoT\" or \"Chain-of-Thought\" prompting.\n",
+    "\n",
+    "Llama 3.1 now reasons step-by-step naturally without the addition of the phrase. This section remains for completeness."
+   ]
+  },
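+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before the classic demonstration below, a small contrast sketch (not in the original notebook): since Llama 3.1 tends to show its reasoning by default, ask explicitly when you want only the final answer."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Contrast sketch: suppress the visible reasoning when you only want the answer\n",
+    "complete_and_print(\"Who lived longer, Mozart or Elvis? Reply with only the name.\")\n",
+    "# Expected: just \"Elvis\", with no visible step-by-step reasoning"
+   ]
+  },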
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "prompt = \"Who lived longer, Mozart or Elvis?\"\n",
+    "\n",
+    "complete_and_print(prompt)\n",
+    "# Llama 2 would often give the incorrect answer of \"Mozart\"\n",
+    "\n",
+    "complete_and_print(f\"{prompt} Let's think through this carefully, step by step.\")\n",
+    "# Gives the correct answer \"Elvis\""
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Self-Consistency\n",
+    "\n",
+    "LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency ([Wang et al. (2022)](https://arxiv.org/abs/2203.11171)) improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import re\n",
+    "from statistics import mode\n",
+    "\n",
+    "def gen_answer():\n",
+    "    response = completion(\n",
+    "        \"John found that the average of 15 numbers is 40. \"\n",
+    "        \"If 10 is added to each number then the mean of the numbers is? \"\n",
+    "        \"Report the answer surrounded by backticks (example: `123`)\",\n",
+    "    )\n",
+    "    match = re.search(r'`(\\d+)`', response)\n",
+    "    if match is None:\n",
+    "        return None\n",
+    "    return match.group(1)\n",
+    "\n",
+    "answers = [gen_answer() for _ in range(5)]\n",
+    "\n",
+    "print(\n",
+    "    f\"Answers: {answers}\\n\",\n",
+    "    f\"Final answer: {mode(answers)}\",\n",
+    ")\n",
+    "\n",
+    "# Sample runs of Llama-3-70B (all correct):\n",
+    "# ['60', '50', '50', '50', '50'] -> 50\n",
+    "# ['50', '50', '50', '60', '50'] -> 50\n",
+    "# ['50', '50', '60', '50', '50'] -> 50"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Retrieval-Augmented Generation\n",
+    "\n",
+    "You'll probably want to use factual knowledge in your application. You can extract common facts from today's large models out-of-the-box (i.e. using just the model weights):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"What is the capital of California?\")\n",
+    "# Gives the correct answer \"Sacramento\""
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "However, more specific facts, or private information, cannot be reliably retrieved. The model will either declare it does not know or hallucinate an incorrect answer:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"What was the temperature in Menlo Park on December 12th, 2023?\")\n",
+    "# \"I'm just an AI, I don't have access to real-time weather data or historical weather records.\"\n",
+    "\n",
+    "complete_and_print(\"What time is my dinner reservation on Saturday and what should I wear?\")\n",
+    "# \"I'm not able to access your personal information [..] I can provide some general guidance\""
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that you've retrieved from an external database ([Lewis et al. (2020)](https://arxiv.org/abs/2005.11401v4)). It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which may be costly and may negatively impact the foundational model's capabilities.\n",
+    "\n",
+    "This could be as simple as a lookup table or as sophisticated as a vector database like [FAISS](https://github.com/facebookresearch/faiss) containing all of your company's knowledge:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "MENLO_PARK_TEMPS = {\n",
+    "    \"2023-12-11\": \"52 degrees Fahrenheit\",\n",
+    "    \"2023-12-12\": \"51 degrees Fahrenheit\",\n",
+    "    \"2023-12-13\": \"51 degrees Fahrenheit\",\n",
+    "}\n",
+    "\n",
+    "\n",
+    "def prompt_with_rag(retrieved_info, question):\n",
+    "    complete_and_print(\n",
+    "        f\"Given the following information: '{retrieved_info}', respond to: '{question}'\"\n",
+    "    )\n",
+    "\n",
+    "\n",
+    "def ask_for_temperature(day):\n",
+    "    temp_on_day = MENLO_PARK_TEMPS.get(day) or \"unknown temperature\"\n",
+    "    prompt_with_rag(\n",
+    "        f\"The temperature in Menlo Park was {temp_on_day} on {day}\",  # Retrieved fact\n",
+    "        f\"What is the temperature in Menlo Park on {day}?\",  # User question\n",
+    "    )\n",
+    "\n",
+    "\n",
+    "ask_for_temperature(\"2023-12-12\")\n",
+    "# \"Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit.\"\n",
+    "\n",
+    "ask_for_temperature(\"2023-07-18\")\n",
+    "# \"I'm not able to provide the temperature in Menlo Park on 2023-07-18 as the information provided states that the temperature was unknown.\""
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Program-Aided Language Models\n",
+    "\n",
+    "LLMs, by nature, aren't great at performing calculations. Let's try:\n",
+    "\n",
+    "$$\n",
+    "((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))\n",
+    "$$\n",
+    "\n",
+    "(The correct answer is 91383.)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\"\"\"\n",
+    "Calculate the answer to the following math problem:\n",
+    "\n",
+    "((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))\n",
+    "\"\"\")\n",
+    "# Gives incorrect answers like 92448, 92648, 95463"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[Gao et al. (2022)](https://arxiv.org/abs/2211.10435) introduced the concept of \"Program-aided Language Models\" (PAL). While LLMs are bad at arithmetic, they're great for code generation. PAL leverages this fact by instructing the LLM to write code to solve calculation tasks."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\n",
+    "    \"\"\"\n",
+    "    # Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))\n",
+    "    \"\"\",\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# The following code was generated by Llama 3 70B:\n",
+    "\n",
+    "result = ((-5 + 93 * 4 - 0) * (4**4 - 7 + 0 * 5))\n",
+    "print(result)"
+   ]
+  },
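+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To close the loop automatically, you can extract the generated code and run it. This is a hedged sketch, not part of the original notebook: it assumes the model wraps its answer in a fenced code block, and `exec` on model output is unsafe outside a sandbox, so treat it as illustration only."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative PAL loop: generate code, extract the fenced block, run it.\n",
+    "# WARNING: never exec() untrusted model output outside a sandbox.\n",
+    "import re\n",
+    "\n",
+    "response = completion(\n",
+    "    \"Reply with only a fenced Python code block that computes \"\n",
+    "    \"((-5 + 93 * 4 - 0) * (4**4 + -7 + 0 * 5)) and prints the result.\"\n",
+    ")\n",
+    "match = re.search(r\"```(?:python)?\\s*(.*?)```\", response, re.DOTALL)\n",
+    "if match:\n",
+    "    exec(match.group(1))  # expected to print 91383\n",
+    "else:\n",
+    "    print(\"No code block found:\", response)"
+   ]
+  },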
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Limiting Extraneous Tokens\n",
+    "\n",
+    "A common struggle with Llama 2 is getting output without extraneous tokens (ex. \"Sure! Here's more information on...\"), even if explicit instructions are given to Llama 2 to be concise with no preamble. Llama 3.x can better follow instructions.\n",
+    "\n",
+    "Check out this improvement that combines a role, rules and restrictions, explicit instructions, and an example:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "complete_and_print(\n",
+    "    \"Give me the zip code for Menlo Park in JSON format with the field 'zip_code'\",\n",
+    ")\n",
+    "# Likely returns the JSON and also \"Sure! Here's the JSON...\"\n",
+    "\n",
+    "complete_and_print(\n",
+    "    \"\"\"\n",
+    "    You are a robot that only outputs JSON.\n",
+    "    You reply in JSON format with the field 'zip_code'.\n",
+    "    Example question: What is the zip code of the Empire State Building? Example answer: {'zip_code': 10118}\n",
+    "    Now here is my question: What is the zip code of Menlo Park?\n",
+    "    \"\"\",\n",
+    ")\n",
+    "# \"{'zip_code': 94025}\""
+   ]
+  },
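+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A hedged follow-up sketch (not in the original notebook): when the output feeds downstream code, parse and validate it. Note that the example above coaxes single-quoted, JSON-like output; asking for strict JSON with double quotes lets `json.loads` handle the reply:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Parse and validate a constrained reply before using it downstream\n",
+    "import json\n",
+    "\n",
+    "reply = completion(\n",
+    "    \"You are a robot that only outputs strict JSON with double quotes. \"\n",
+    "    \"Reply with a single field 'zip_code' for Menlo Park.\"\n",
+    ")\n",
+    "try:\n",
+    "    print(json.loads(reply)[\"zip_code\"])  # expected: 94025\n",
+    "except json.JSONDecodeError:\n",
+    "    print(\"Model returned non-JSON:\", reply)"
+   ]
+  },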
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Additional References\n",
+    "- [PromptingGuide.ai](https://www.promptingguide.ai/)\n",
+    "- [LearnPrompting.org](https://learnprompting.org/)\n",
+    "- [Lil'Log Prompt Engineering Guide](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/)\n"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Author & Contact\n",
+    "\n",
+    "Edited by [Dalton Flanagan](https://www.linkedin.com/in/daltonflanagan/) (dalton@meta.com) with contributions from Mohsen Agsen, Bryce Bortree, Ricardo Juan Palma Duran, Kaolin Fire, Thomas Scialom."
+   ]
+  }
+ ],
+ "metadata": {
+  "captumWidgetMessage": [],
+  "dataExplorerConfig": [],
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.14"
+  },
+  "last_base_url": "https://bento.edge.x2p.facebook.net/",
+  "last_kernel_id": "161e2a7b-2d2b-4995-87f3-d1539860ecac",
+  "last_msg_id": "4eab1242-d815b886ebe4f5b1966da982_543",
+  "last_server_session_id": "4a7b41c5-ed66-4dcb-a376-22673aebb469",
+  "operator_data": [],
+  "outputWidgetContext": []
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/docs/safety101.md b/docs/safety101.md
new file mode 100644
index 000000000..2bf8f1bfe
--- /dev/null
+++ b/docs/safety101.md
@@ -0,0 +1,52 @@
+## Safety API 101
+
+This document describes the Safety APIs in Llama Stack.
+
+As outlined in our [Responsible Use Guide](https://www.llama.com/docs/how-to-guides/responsible-use-guide-resources/), LLM apps should deploy appropriate system-level safeguards to mitigate the safety and security risks of an LLM system, as shown in the following diagram:
+![Figure 1: Safety System](./safety_system.webp)
+
+To that end, Llama Stack uses **Prompt Guard** and **Llama Guard 3** to secure our system. Here is a quick introduction to them.
+
+**Prompt Guard**:
+
+PromptGuard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts (jailbreaks) and prompts that contain injected inputs (prompt injections). We suggest fine-tuning the model on application-specific data to achieve optimal results.
+
+PromptGuard is a BERT model that outputs only labels; unlike LlamaGuard, it doesn't need a specific prompt structure or configuration. The input is a string that the model labels as safe or unsafe (at two different levels).
+
+For more detail on PromptGuard, please check out the [PromptGuard model card and prompt formats](https://www.llama.com/docs/model-cards-and-prompt-formats/prompt-guard).
+
+**Llama Guard 3**:
+
+Llama Guard 3 now comes in three flavors: Llama Guard 3 1B, Llama Guard 3 8B, and Llama Guard 3 11B-Vision. The first two models are text only, and the third supports the same vision understanding capabilities as the base Llama 3.2 11B-Vision model. All the models are multilingual (for text-only prompts) and follow the categories defined by the MLCommons consortium. Check their respective model cards for additional details on each model and its performance.
+
+For more detail on Llama Guard 3, please check out the [Llama Guard 3 model card and prompt formats](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/).
+
+**CodeShield**: We also use [CodeShield](https://github.com/meta-llama/llama-stack/tree/f04b566c5cfc0d23b59e79103f680fe05ade533d/llama_stack/providers/impls/meta_reference/codeshield) to scan code generated by the model for insecure patterns.
+
+### Configure Safety
+
+```bash
+$ llama stack configure ~/.llama/distributions/conda/tgi-build.yaml
+
+....
+Configuring API: safety (meta-reference)
+Do you want to configure llama_guard_shield? (y/n): y
+Entering sub-configuration for llama_guard_shield:
+Enter value for model (default: Llama-Guard-3-1B) (required):
+Enter value for excluded_categories (default: []) (required):
+Enter value for disable_input_check (default: False) (required):
+Enter value for disable_output_check (default: False) (required):
+Do you want to configure prompt_guard_shield? (y/n): y
+Entering sub-configuration for prompt_guard_shield:
+Enter value for model (default: Prompt-Guard-86M) (required):
+....
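+# Note: pressing Enter at each prompt accepts the default shown in parentheses;
+# accepting the defaults selects the shield models summarized below.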
+``` +As you can see, we did basic configuration above and configured: +- Llama Guard safety shield with model `Llama-Guard-3-1B` +- Prompt Guard safety shield with model `Prompt-Guard-86M` + +you can test safety (if you configured llama-guard and/or prompt-guard shields) by: + +```bash +python -m llama_stack.apis.safety.client localhost 5000 +``` diff --git a/docs/safety_system.webp b/docs/safety_system.webp new file mode 100644 index 0000000000000000000000000000000000000000..e153da05e9aff4279284edde9373a70fb14510e4 GIT binary patch literal 32068 zcmeFYWpG^Cnk6b`vdCg)ScaYklj4Ozug+gpDkjde zTD&$o7T>|L1{@()sm&HJ>Z#@m6)dfRc(FTdGS`V4S znnSXsATb@z-DRJ9*_Q3J0lQFZYIq!+>)UqSNVA$+=l-`9r&MPN zN>;4f<)~@uLn9O5%w@YJqt#F48=sf|hfCFdr-5bICfF(XXFVE-0^Egi%AmmYhGBNu z;Ze~M1pUwyg_ab}Y5j-@G>u4R1KR6T_dk0x#0X~Pa4hjh5(C9Q3Y&3-GfIq?mgwIq z{p=CO?VQg&<8XL?>kR>a;aFhue>M;xnQ>-&K8F5mDi}b(w8YrOI&7z5pkvWMk|Jtfk_fG{9`Ugak7u;Vd+}fei3Z%7IGwrWZYN- zj-u%kvTDhgRK|m9u!j{JOFT3cp8&Zy8%wK!x8le9HtAAVFh2aJ?Yw~k{{ui$QBQt2 zrTq(ln9IlBsOCubJ?C{hEj6qL5e=p2IdtFfvRVZ&-)rF2#wy3<0i6uq))8yS-+G2x zm}k!d?Z|PGAGkLP7PxI|blTq)c@c1`ZU;`8(R1I#>tB2c44+cZ@6C=+9iPEq28NB7 zYLCC|A$-mEHmd6OHsc)H3s$@mKor!KR>MO6*HAO=fXiYR{w>sp#*UPS|B8w%h6nwk zB!;#do&M2A6gy*zV#S@)^#9uZ1+P$G|3FfDs>O{{rvDmA&**IJ0RI!tE-dK%Z7=Tt z+c3Y<`EO!!Jcc05-@8Y7#L0z2>fgd|W}bf{f49>LGVH%c76m0b|M*RR{ms{(!6{+Z z8a5I90+I6{(17U#lf>ptUYc1mZCd9cF4;j&qLUC$Gc1-|l{BrzVE;}x%;7YG4u3}( z^1-RnX7Ru13Ym2Z&*<;W^i89-Ysq|CG5ZSy2lL}QRgL@MO9li1w5QdsZLnig{7#fR zOad|E*A~1|7haR4){QM->!F;udmo5UnhhB-E z;XMNz^N;VhzdkQGjB3^X8`k^{UjC{kYuLSC3)UnkJeC^hcau5&Nh9qs7iA{6m6Pwt zT8aetOO^Zzqg~sdJ)#0Cyc&bC_v;603`BiHpvGF!e|TIxxRKbs-^3Rs=V zNDtRfNJf(Ig0Gto;oQD?{OhDLVX)D+Cw>k3f2kXz3<|J6PWm&6<4-SJ4KE9^0xr}< zpkW1cs~-V|2ciXyVTTpD{E34Gg`rzYhb}=jYWzO%%0BKN`lxl=L@Rx2agzRdUgikp>&- zqT*~h&w;2J7PwSo>9_(ry*Llc`B0Qn^E+)UN%9YwjdGt1i^>rI$|TmWuRX}u4_2<^TdIIpkkh90;3b<|gXSCl-;m@{X z=4mr*davQooCv_qVFFW|Jp`de+XFdRm-RbVKX8P!;kw<6TLw$DZe*rO6gk)n*_3?h zgNb!k(HW!VO)5vj7Ei^8l0EsW z9b=mIozK!vwPZ2l3F`bI8kS@r&y<99ysH!kng?7I3hpHG&m7}O8yzgX(J%H_HH*H1 zG24zwBg5EN_#WmzBnF;rREbP^a-d=aK@@SxOm)F@Zt|PMK4oJ`|9E&ph#W_~wCzeX zAE8>8XS$LgVU8A~m*Sg_zgPC)kc?pZ)zt4`0V)P};~0+P5rQ2utgNCj@|n4+2sQz| z)K?r)cH5q$cD&u_j`pXX5BL!fFXZ!?a*z4Em(qVRzyG;Ub<^_k#xM|gG@xO*tf&eE zWTvn^zn_0$0^F?3UuT<_7@CxdFHxi+-Xsbqh00Lr*FeIyfTG};+7bLRTMuC))^lY)!z6xIMY^RBC!YNFzfwj&`jPS}grT;Obo zJ5r@IZOk3}QY?4mCW0lp0xV;c;pG5Q^pP5L2H=(8yy-0#Eu&QD?v>d;jylnDp<_#t zyuMWPpJCMO&JWJ!o;WFj5hyW)G;9`7Yw0;E^{KqGgKYb(#FnRSI8PWEtjx9RhC%H^ zuMtNVI7^SZ`Bpj6RS0;2RJKzflO4?;O#TiQPmlxVgRy>~upVCEBjdZeSU%;~07vgA zQSb_TOoD+B)M>7rALkJWO($aJs4B1g0gAz)P^A5HpBk$T{sW5&u>M)LRh)JLut9B` z4P$7BrldVxLKtz+M<|?$l|u>1HCu~w++k-QudQ?NV7&`b$3-xO)42{$)lJ$jVpVCI z`u^qpCZTxCLw+5{#N_jHI^ggpqg{1piIC0M>O{`h>=#LT^}&7Ni@-baH3IOOZM{uf zCmz;_n_a--H{_qwY)d+T@ZElRQB}>LJ~1lVw%m@G6$DuUA6j^6W9@wM0gV_b6Yw%$ zUX@syo`_{&^nqG=jf<~%rLhzJ5Nb#w{HC2AFkvQXG^V7=W}fD+UP<5AnaieHLjY$y9UeCPo}t|L}<=KzkQnm+~O%+s6AK_|2;@4()ivg&vsXIWOKDCu)dO} zg0?)Tx}v7P@$v2#6u5rk;~L)Y0N@5dbCo;*oO$99KV3)`x1q(6`WQ6BLA|h=x(l-; z8<9`dE7gFV9Xk3N*R>*OGVXhaYP5#5rkh<8B9F&eI?x%2)YLsq_?u6bJ^ujzfuS&A zWci=e|BevBluL^7+N1V7y6R7C*^M!plz@|_E6cA_a3?;rxSF3Ik<6QgSXrYw^#WkW zJj$6`CJi)YMKG12iOQ*AlndIe(1XR~LhqPlIGoNKK;t;J-YLIxy11Q<>$D%nYyqf& zr||3etu4oDE$46fZK~Tzk@`!0WH_Sqf0>g=GxXnxcpOQz@C-q?aMGc?q=;+WK`=Eb z=hWp^ocbQD;s5=oA5nf)jdq_Cao5#*f=`kXYwdPp&h&0qz%hoPzf*8pEkK#zG>(1B zFH0jq{leM!PnPg@kX}lgpnoQqvBe*5LfD1yty%q88Q}Q>#)Pnjx>kN84CEr4IiKT; z(rN1zsj7)tHs-hDNdFCx_geT`^Cb1px-0dV8@PH`7C+q*n4)(C!uqW_n2Z6HotGNZ zZ}89-lm>$-b_5dbkvKM-ySwVmY<^3ghVT`7;akIi<%*81QV3Y~j(yzD^X=0y(Xqx1 
zn;6wo+i_C=IA`ww`k&f%lk;6`5>N)bBl54wYiqw3Z29E3!u%;yq08Z9)P9b_aj74f z|2djiJ-_r54MN;{)Zer8-bQS$prabz4T9lA)|8r;r{A%@gQ9sta z{P;^Pl9YdE^?46pU3srIV}6TH2=fi`kQF|rKl;5T7FfXc*W^2_^|xR z0xci^G`(tR?|uvNcQJlwuJWfQ$734%LqGp9kZE}HwZ4n@-f?-9uXsN#GP4rhmt=il zYrp``pO;@F1gSeD6hmIB6w@cHIUl9>1tia*STIBWoTh$B#6JY`&%Gfj{lU@EN#~>W z*R+LqRR12 z$K?_E{zv_G{mk><>`xNd-^?sUwxyA!mQDo`+Wj@@4bOiWZ`T`&H~WNPV#VBWZlEXr z8ykTNtj`zZ>SLiqVo%Ul9(IvLHdU}0!qq{Yujtp-z>4S2<1qeilktAr5p(t2e8Ns| z#16mGQ~|pK&6T%|tk@HV@a+M$o}38$Mjj~O*l1FqpZ=llYbpR@ zk179N>ge{TS=TDy(bjSOtS{*^4(`kE9QNg+$Gl^(TpJy{04|_xqx{GQLQRl@@V7*!e1Q?}5YNF9R%CO8MpF>q0u! z$wyY1vA-rl$iDZ@C3HN-Nz={TcvIc#y!hs>(pdE$E(>;ST1Q}J zWD*QZb~L(~ujen!FRWrioRX^HfttI|<4BMG9@R=+1c=~Z-2T_loP1CAGs%B;x|HHz z!|%b?18>r9#9K>f3v-SdSvmd3QsE!ojVT+Z46A1}CcDcAQD(JkB|oy>E6*?L zlyk7-h?S+NXc2_1fFp*`qhzbflk9ezpJ-L&#-udJlcg1IERAwQxbX0nK?8b2l2eRr6 zLb4jE8OMQgAKZ35!6?w2VWjKpPo{nm^KUC&=jWLJUq$^#_Rf&(vzly@CnDFroZQ$8 zjnU=>rBV6jl@`SKz8>sv+vCgVtJ5CxK&FX+qxmxdboAz&N~k1vDL4|ARelU{cs)Od zDt^H)E!~Ngs$2dfA#6&u5(Y_23=|zq51)<=eUeBRwedS|KMjRHjf%e>7iiaLF8$Fj zot3WddgAxCD<%%^Dwcj*5bqEggd+F1CkX!u0h**4BL1^|_}@Q&_yy#@rsUuLRe)0| zQmHgdayl08KLhsPKEfcQ)N&Lfb0Gfb-kL#}O)U48ThVlVa;SepkI-nlm!=5rFMXL2Q>k$y_S+ZbT>SX)nRK7}%Rg1=t5=~a4+4C7^^()W? zE8q5HEynNE&p@ZnG(M9t?Ymbv106b3H6>tNwavBoICnqm+d3m^X75X~yV4l(4AMIi6B4g_5oJXSa$k*-#oI{`&0;$b-|a?Bht+pg-=kOd zf=(g2Z)nFn&R%$GpH_Yv`oN)BS~Hed+Ks2lBH{wa=RGquBvS>VY4KmyvMYG*2`2ZD z%^(hEN2OhlK(JN*awIwoK_EQH;v;q;ib_k?5f6a>PFjpimL@G*07ztv7WPLL$J}Cm z1;Jq$(~k*Jx11})I8OnF1kD-RX`8%$kMPy8Fq&sm> z)z|&PyY?XD;FPRCSfw16-+Xcj9GPw!Nu>>YcVJGcC$;?r{5D!$rR$^m0 zxav>wI4!8Z)yMCmNmpx(DSs{inL$b!iyehB?3Q$@-MGh$Gj-c_!eWC#+~|p8^yz4v z5w!$qZv^yH`v}L$S5>}x@}^!3JK@Un0|$}7^`cpS{#wH!t{cuL*_s+KEB*c)`L7jf z=_SwCq-OZpnTZr@LEC;2ZBuF5>rDku&&gI7?H|y9<~!hG!nW(<#qS~ID0Ne)C<|lP zL{>aU1Cj$7he(DR%!-E#Z0&i1cUT)ixr^&~9TF32+VGYqLUsI^vKHi2n_mk zmSnR+?_r7w%T_E3Lv-B&>AeR082rGO&*$PZ z#35aiMz!n;)@vZsuqTfVJZXDfu1h+BSTH)j(ANm)o>`1&b5x%%!^IFUyvOTClft$V zTn{ks@qdetU3Z4xwC2ZFIreGo|2Q1#Pv-~Z{9P+?E@q_J%C8A^R+Fe`G&gby!x$_Izw6Pl7#c6DnKlj|seW=A%Cy*ChBu<1UT;BJtdM|nrRCZcK zZFdJw&~TQuCi}0RyqjSwSpKfh)soiim>;J)<=VLth030Gc{Yz62XVzN9&o_0&8lZk z&SJbFPsOdR)R!65u94JLezfAV1AzO^9pk{v(UgCFgEZqGLJcP%O`*Y_s?TA6)cD?6 z1ylnko!WgJD$STPqJ$O7AfBWOwi!2eCddxn4e@cMe#z4FYJCKCsLy_+i@l@-NEu<# zSTy3PG!_Au6-aejiUfbKwICm>dPYr81}Bq}ihKp8H&ic|#9)hDHO5fxj31~77&Pe9 zSg&%VMxyniH5$o^+v!E>gmtS-_Ek`m(u^ByFBxp0xS~stMdJ2)#|H^nm$C~zJdK+j z4!gAj1P@ zbVoF?NOMM&>`3+`*W&K=em>)^A)FmXI%)X)iojO2r}>>7&P(q?XHXW6F3IL~&T)9@ zTsvSIQHKj1Vt?RZn~(n9oC`kH4mIW@!WJ1gU=K1n2^5}SAEZ*7#Pyr{Vp=6agNpwa z(<=2gR>)kcY3DE`$tARqSys16L#x-D~4_!2m05wv*7hCUd084F)I@o3h!) z$l6DWj#H+XqGjcEfxStx; zP5HqD#eryXQGU@FIV`M%cT8Sm0(k&x+83f>-tOF{J61L_4Rf69@(hX3zB)q)Y!DK8 zWCw8qvn2*n8cGRYgLWz)hvUw}mWDxToHQ(I3nF3Hc`m@DDitAK<>Q+L2imyd6|khral@U#!<*M$(5k)WsOKowljWq`k;U=fI&8 ztthKV*CFE)oZnFUn1=-Yn-bCrz$~P8t9p)HHD^@nPU^ptmlvrMbx_`(@pdY5w7ibx!K?> z&RbSyTw|#fd4t@C`c5D;XVcS?i`P@-p%=%wzwj94jTUko@I-6CI~pC7_d$wWXlNo;b4i}6 z1Uq}*j17{ck?17GqnrVF?TYXN)lttqlU;>Tg1%yl7CYEd8pw8hQPB<@-j+}{El1FZ zkOD@!1o#9PMfYlOYk}*uiVOi7`hnKbfBIHj==5;MmGoQlhoM~)BAyhodTNx<;T&8j zO1liXG>4oIqZ;I(yxhWljWo8rr6O(+meGS>cK`qibi3jZKLj8~W@Q_baaxy&JE(Gt z%-|vh!ZD#Wh7PACwPar^>c1M`K>Y92g&j;hJ zY|v1~Rl+~;?UzPQxoEW!l-hP9u)+L}rTd|$aUMmkl;WWi&E;DsUmuhmGW^4U4U4_I|&m=32? 
z1vKUo0RTYii0swALRjROCW^Ka?{)64BlrN|_{D?Z*pEO(7)u*&e?LvN(@n)CyQz3m zR}6dx05C6udGY)1AJ1O%o+y28qZ6?E?XK(hdoYFBojQjfxw5hV0N~+!#)LtpyBiw- z$hK#j9-~~<&6iH@yHuw>3=T@vn6@YE+Lgq4MS)n`z6R^U&x@?n(U^~_nURoApXwHi zebJQM^tFJ=Qu&qJp)4XYFRE}pZM*ryt9HVDf)8_Snh97)@y?-rOY0%MDyR2m|9zo( zjs#^mw@owmGO2e}Q7C2QfP(iO>FQ<=J6dP%m&)86Q>GX-p#G{Qx_b#R1fcZ1j=YNQ zv_`EceSp}$)`PS&;RDP@X&GX31S_>EMF15oz!zyPfj6vCfSQz-a@Gzb&eg+QQ%i1t zP7?njdpRS5&(Edy@hly$s3QY(MRs`U343in%cvAy&%t*Ii-Ic%9q+D?IxAH-cH~I9 zYxA(!EH+6w)Qx>CRJlrE;V_kDknjc!{_BtQjTf%bpb-(Z9|noT<2c!APoe94aSApI z;PtaW(>H+>?+1aym5FEn3YAVpwB(2~dKMV)omgYNSI)gZ0T|@vmy@${r%WJI7!y`~ZU+wIAS{JWT}T8` zs$?5h5If3^8!O025xw=JMXOtPSZ)f9XEspKdGa2uWQN(k+j@&##>@R}w0$oTA{epm z)X0clEFRpCL=Yp^{;1D7+6F*r1QnyLcWBZyIcvgiD;nDM)Y9J_hg$FsR-{WQvZaILn4_WZWp>JvXQ8(}4V^(2=;C}shu z=vhfiFP7G?-$TZ^ojsNlb-q~YSdReZ7{)#`$|DEji~KP8HcJk6Pbt1{l6|4BlEnM9 zG0@}C^fNQ5soLPA=fQyCNWun+&XH06AXPj2UPF(Pcta`%1r7EcMYZ{*B&XVAk^XGl zG&@SJ^tI=CAItWAEfH4o$V`zxnNd^w6=T#6Fs~?%T(Uxs)uq+CG0a>G!on^uwSpf< z2gmSWHqan1wzK@fz66Xwiak3*`PdC_LZ$U<)!addUnD3dK~^Bw)$ND9U2xeFw^x*< zhYqppS0@V{H;F#}#=CF3#(>`$3y0zh)n2@Y*xV=FKBin3D0qXiZs}xbEPIdTe|a15 zn4)w8>BoJ;3~pJYdsHZ^JE$RZodbzj$&SN_2R4?IM8Cnccm>9z#SA>;doi5FBNBGx ziZTp_DN^SifkKhYkfajM-W*mf z*=pl;B3|_wmr$Afi0l#}cfHk-DlQ{Er=ymXrfM2IEv0HqSu<}9u4opfFkQFqE z7zKsOwDd(*OmevC6G zCFcOU6hf-kJ<^H`uHx028lwheOecLYh*8*@XHi(aY=*o%gz($64{APn#k0`^;7ao` ziJn|rLwvHe(=0IJ6X#DqmPwK`G|nWMg>3IuUX~qtu#VTkH1C*4UODz+H6}@DN4Rac zT+W`bX_K{Xc7Glgx<3OgHeQRabBBcFqw!Gcgw79Sf%)$NjtyfQc>J02_r7uv;gW*> zhZg+J#tJRIvwlk!2u~*uo;Jzcz-@bLK6|WIelFwWc?LUzuo^WZ7(N?C0&=4QcNJ>9 z?tlz{{NdKn_WmHBLhkB=qfLNsZEMnmo%Xr}Azg89=5~D*gS@ZA?NX*E3(;mevZHhs z<26Pkc9O93Xty}9GsBP&HkP#Y-B^jwyWGHE7Bw&hac~CmX4E@v*1JjeBRZepN)G3^ zQvkg@dLe8+!L6k-LSU(sP?{VxUdo-U7o}r1PXKTu?3RAQ%he;g466io_Cno=ImLK+ zHY$@)L&e@^J#SugttUqIs<}JHTi|2|7szWi8V0yvQ2Ov;CD@C_P%kpEj-_s2|0`+m zM!@7gm1nN0G>-PUO}oaSxDKDcGA|izl?@^R5}tR*6n}E zo$}VD{5*bL?CGl7IJB^?YNW_A<;KJ(5m8{d_~wS>lU^%@EoQ1 zSyRHT??;|4^+jqg2Tp89@hNw8-i4MCk7Ug;vKLS$3d(}3ZKPR_xrT0WB~V#eOwbh z_TY!7Yqy}LxxRR&+!pZ{1|wL;{!YU$oLx!Lf=NtJQ~DF+Z>Fdk+n$Ebh&-Rf%IjZpz`DOeq03q;*omb!Vm?vnoapTz z&P{Po%m{;YoAaS#u7kr+(>Z%VIW><$=dIL71Ju5vtI1nQ%=?R)P_GV~m>|J)binj{ zUGPpV=SD!zHlu#%P04?(v6|LdcYh$i_1S0;!yX(Qm&jK3ZyZN3J(J10&Pd_~}u;oRq9rT+0qM;Hf9G}<0^esv2RLGm89jixdVZ-6^Bgx zVQoc#0@MW{!uny<)hlqId+0b_?V>Jsx1(?@t%h~)G_|!_pWTr-D)mRB9{YU^t_Bg( zOSzR6A$|xwv*m_<=VP#DdcWvC>TS%CW@B0>f=<{`iEBwp$aHWiqQ#gB+Hnga)T;V=| z5VJG?Vj+oO*~DVx4wH4;KvP?#Y5z#5Rn@d^+?@#2l%-LSbBcq$ESD#b2-ueBJ$j;e zHuxs_bs4iH?CvISx?@!qz_Hq?ykf3kZ;)5@RP9Jf!nAff$Wz!JaRoGU`}_DP4WV0>+i#2eb_rS*t|XUC%LPpl zGbkWuoSV3t6ccbmXT|~quee=&@}8vgNr=iWFMzSnD)wn!_BNs_VKWq>h$kSKh=oV=$hbmu{&DkF}|gM z@84F5mG|^z=YiPa8aHTX)+e>l)p-%ve2|^FFL(v?jsSz&~s*&riO1 zd`Y~?;)IDG@UYK697BkSk8A!OC|Iffvp7-8(PA=J9kI z$q@nneAaSezsLpKGTfRORWiv+8rbItKHslbj1}ty>G6!NBcJhvDH=$(c2uZ)WreRK!$YDcE9IpZ0 z#T?%P!e|y_ugUtNcON$}BjNTD=IO2;n_nGN*pVd2Kwx=i+M$oV zhtx;EEP-ygOb42`Jb`}j9=Qnbp!z^z9A zlkgLiJ7`>{V;Dk6aYzx+k|K4A3(EG=qtT7ihgYApDvbTu(!V6rgcyUBPL8#AoVPkL zL|kxSJs`s|pRfjkrrmC7lyp0US1z=JWZUa|Za}uVh`X|n|ndcJVMIs<7+J;7uV zRxy>Yx-=G4I|9lqk6PZFGYo}3Kku%qBShP5D#K*kUp34q%US(#NZhubFou_7d<~$X zV{g6mHq%8`@8T%W>t!Y$6x}xa{=Qd0)qapj||`-GJ(o zn!tEzR1JaMYaOJ`N!zhn~6BgN!6 zrrJVzlTIq|G%!8})vqD5Bn1^vQogkvBH_A;wI#HI-~b7z;QcZ-M;5jVGQYiv3WX6X|99bAiXFtc8eU z43))bcHo&M-cgu(Jx@35oxNt*IP7A3{tGp`ADPGTBD672kLa-EdS_C6gf0q#;F`2W zidy9y9DZ=Mp)<@op5`C98Z`lo$?@q-i-$0TK#a2MlYnpt6?EmD2CbMss8I7WALRhl zK#j!$r0wyUvWGqAd~AC-@h~X%GuZiNy=4kwg5cx?$XwvtilUF6;jhCA12n%0iP5`o za)cf_0&71d4Uib+v6uP@s4e{YqD`u%@TeAD3f$D8LM?$kJV^Br6Q} 
zP}1;FB1RAxKZVguK-y4fp*I-*C?5ZEb9f@VR89i*>8R#j_f1^~CH1J^&3aSI+lw8k zM=LRwOOr@4x5#!sN)lmg3(F_Ta;5Srp{5+QL{-(3y;`~4PuH9cob+m`p`ZNmkuP*O zZb3M07cY_J;UzQ)>r?Pl!o{+Pa_9)Dkdi+QNV|&Irr4q6LG_5yS^OD5SVv+Or`fy_ zE*}5}WO@BqLIDtVVfT87gixFp$JyvtQ(e-8$^J`nqm+!oj*aAF<`YNT_WsUqlwI(- z$R@kWPT;slLpw;&$U2~=(TMtWEmi}tM?Ttf6bB;w7Y*HPv=!(813PYj-&EUpU;&AN zt8j$+>k?}3+YzZMIrLg#aGJ_zklUiax#Xz@)r2s&7s*k5bewftZ_-Pd$`Mr z!qnt0Gv9_{iS@2P&WcQYTn>mTr zNeDO<-8@(sLVbNve}X-rw74I%z@^zV>La(iw>2orE>T!6jF{o{j)Mo((Ui>e_ zQ*qjNlT*31-@A^=>Z1m>^_^u3;n%hAE19Wk1;C3Jr?>`nSr8-}b_n+;N90l#En%qn zZtsq&@#58CP2xy8G_9>|0(J_|&|D_MxServiz8UVd z(`!iv%Ud?^*jWqlaK+-d)i*GIm0bM-w_!u6Ot7UNW65Z77-cnwJ2LM`@r|ko&JLV#5>LugIiA2@A)B)KV8i-FbbA5tc28 z<`3L8YSc0o;VrM506Ke5>UG=-aMvPoiD( z3>qqegUq-tN|STnm{WNs1w1ZA8(>r!Ma-e=>2MQ0ONlAI8H#J%)Y>tF6iLeEaCi;R zZhDpjOWBt|Ph115>EE-wR6D^oN3tTiL=Atc;?QmrCB4^d;)k^eA@=$}j^fx)x~87g zDIE#{O21s%&Fq1V-UQ(TC7Z^HNo1fbTRDJax;$8m`?h*&#r0)z{So=h!cr-M$I$ia z$$waWw-6iH2D@D_wfmwbp$7JNFVt7&p-k=wQPU%NW-P`AK)?<9q1FXh2&9l_o<6@w z8bW5vL`H=mNK3enD&XYP_v2No`|}t28PRUC^<7lu+(%YPY3s$H-R?jtF(9)UmGnwi zsz-B%nMZ3f2zZy6me=F49+drX0Ba8R^ly}gcK-LfcE$iaMN|ta(>=L3Lq3AO4vlP? zQPFr3yJ^99u3?VriM@3rjE#`i0LjOO&0vE(jFjLu7P!f4l&M8nee!nFE(I#00wVHA z{!8v+7&felYCt2OCZLxGb|diD#uz9w=H10JMN))5a8y_u%!yv2e3l7Q zKqz|tc8And^9=Vqu~tZ`IpFwm*>z)Nfx~uz5Oq}E;hG~}-IWC3q3OqhCvViJO^`e%I3lQMh|c$4vXJv?K*tE&s&vyMBZNvbnkT!|n0ikLUmuU1IB1V0y~l0DZ3 zoz#S!I;A46M*>Jy=C~kpO8`6w)2&g;kZ5xXck-FBQocY3mgKh4%;G#pLem(>MmD~Y z>|OzFpIb?}i~(HBp)d7sbBl9n=E-HKkkbexH6N#|p2$uYx-qM;^v$43i^5OBzO@Qk z>`nt4n6frT{SbM zwuAy@q{>&Bv5W*Qy8B|}NX%JH5vt>0zE3AoYR9`6UADl~tf-Q>n}V6cfBMRc2t0r^%5$YV`C-ZS8l zu?u@RRTji{ecL$=3gHQfk+;KaP#CHV|2vx8nI<^zGWY|e=D5VQvT@#>n8ad4*Tn zyf1L=yh!Q7(?maya!yIL)1;9`_V*UU11vyR$H>@1h3)|)SPkEl$h>Ujf*q92l{;@> z$vJ&!jTN>|-92bv7s@zZJrkuwl*pYY0$FPrcaK~_qm*NI0Rc6eKb|tc>_Ub%$bO(N znw{1g!)fO;hf8@YgV=%(gK9v64m3k9s?kiYjhUCL5ejZrXpno2Ci5bIbAHvHp^6ZPsI4y;>W(WV1fAhtJCGW(YFFq}qA3zX z4fXwTbuX|dX1i%DMi4HX6-6|zUB;00z6_Ne31d-w=_LV9u%hNGaeqNiW zFBlV!_C>K*^m>pB{z>W)p7>%x4ImoaPP%SOnHTjR*x}IZtX$XIsy@DfzLaG7G}c)) zoSZ$>5(cGL#BU+8pEh0B==vIVP~%=f_YSKCNAyA=jJpQ&1D8z zyUgNDs>%)=uS^dPa`Ix1Qd3?HROh60KqW~QDq2kEu^^d%W5|OrR&wH6bz_4m3a*$Q zM6Fqzs8IEwj)-cOFMhbz*9GHi=OM64UYG$n4#2Q9QQ_kR@wtq>K>8>*7oye2Yhp7i zp`nUl(M$H70xWeOtK&N|Dgd{Z-ZJ%~7^+`#AKzisZBz`l%UxqDvm{*!+Xv9C@!^i% zkbs?h*S=vW!HRCSv$y~uf}TOV{#NxzRRp8HOf~Tz7nrX>dQ7pLuk{1pi(zXjLu{K- zg>;z*pN1kZvw8vyWRJ^CgEO&F1>rh{r8v3VJ~)_ghf_LO_!2}^IuV#OLYLKLB~R28 zeR~aibJUKAPA}zLg$F1HSBbIaj>eL1uxLf$tU}SARn;&gfU@nyVK=}mg{1keQ_F{T6A4>VKmbf`F>V#0n!tma$c8Jcy&65Ma?{2;b`|DImh9KUxer_R*NQIKc4xXxb`{qhrvR_*68L zdl*;ZaEFotS!ysbowf9)7}^*r&)qgxai&>h91IJcsK_X4jRl~dB01f@T1cf4&m^H~ zag_@01m0R(Mp=Gt-OXX^XS{Wp$g#+|A>%4%HK~oMdAH|jYNM8cRVz1xrn?r(s=hau z@lxZfx?0OGu#PU^!{F5ZyFx^Lg5p((R)DtQ847>hTeDrp6OtEk1DpyxM?wp0 zY7(~oJbhGiSxdME#s@HKH&WN`iQutgQ$^-%Ls(zN)8{GD=@b^)f>Gg`=X&5g%f7qaPGT;0SGsW*VN|G6bsT3W-zjjy&F<}?!I7~j|BW}fuvyQl)~c+ zv0r;Mq~w)wA|Nr;W*3VcV)UcvK`^R0UDqX)hchh9#rl%#=M(s#MWc!z%u^F5LU(Z@ zso_}FtM6oYEO2DJi|-ea(VCVP??Gs#43TQZPP&#>eDk!bb+F>dwISn36x@TvT$mp_ zX5H|IY2lX9&?HVh)A8Vnh{Qg3XGQcyV$ndSNw4hLQLB;qZxui_{5Y1u=;@J*`m6j8YCTA z0!8BP_`?h@z9UJ`X#iX)j75 z`SCB79|!zzX#xno>7MDeOH6JUkc)?Dpv-u73<)NEk?j_u48pA)1VS_c0u4on+QfltrE4k6W(dMMsO|zKION|rh3HVK zCo^k4GnznX9X%)X^KmMVNImM1s-1$rp%S$lyr~hong1jNILH}ybQw~7t zEtEyf(gfziTYF6VuK>COA`6-HX>lYG+8mSA`;OG&`#6<;Ql-Z3ArIz1jFEkq z!v$Zc5h=2jx2&tC8}1P}4Z}G1m@f(LeVu4_IkQ0}bsmBq2Uv}x*ohEesrCmlCWvWK zznIt?Ft@TEfa?Olr853sy?jHECcw7lU$$*_*|u$V*>+WzZJS-TZQHhO+pg(*XEU>V zt9KTe5oeQ;C%(kd=gUf3Om98nl3{jpK~dt)bk3K{61?y|^;l1qADevBH3_~aRdm)} 
zWUvV?)y}jJEk602q|iMJ>((@G8jB>byKVN=+B$(^k2;wV5F#D+yI;bJCFI(2^|w`mOlrXt8au!WgC+)YHTtfMPnSaAG- zFbd#_^PAh|prIw#Y>Qbs`~fy&?P67HHlr?voO=JkM;JyfN{cqSwws(!%tcnMVy$ z(D>yk^8@o|P$57)zt(f3Fgc;GExr3~4azYzA0+PDh<7X=g-f@KI`Lw`Y3+`(azV5E`%v2bq-aT!ec+Vj1p*$IrkOvchm zn6aSKGOjirTLgwl=yrDFZ>QG431a_|bFsdpXP1$T$k3>UE`soPp{<$u0ulS^p7ON2 zEC|2sW!+(NQoPHKTNu`@;+M|k#Z2&|m>mpk^U(EOeAbNZQ%Wgte*c&mcT;{TW&g=ZrMpd~T(qCtcU z6Nn}*%s=zU<%0pWL8SvXKBy_KHN3J9Z%tC&Q7l`K7vSjOLie+-R}k9Lv>HHneoylP zDdA=iR$S2i%g;QVIFBhs$>nnyf9~)Ah;J1oOaXo9!x&>B;aj;A(5ubKmDN`wS7I zQNI_gO9IY>uMXFxWX8hwSOuqK9I_!)`?8LBf@=tcBP3C%H)s1VAZ$f6af(L{?G!> z$(@rjUTX}WC`?>9-+(WjTR87~ue$4;t&q~sDsMyhayvyOX)iKUONSaAkjj_R!DqFc zK0oHrjQi%>*J8#|%(Cm-o90(~Gr>FF`bU0-$h`8M+y#{VZYFI4n=24oPPlBQsr;gG z?bCG#2|a>@2M`2W>5k^2OV%KBE}c+EmcJe-64CM8Z^p!gwg;m2j^$A)iC%O6qA8o8 zWu$WRuKD*c7+z4D3W4~D@j~Brn-NdTtx3=C!)beDKlDb6kS4SZNdjJ?62yi?FfL&r ztEtsz)z|0X!8@5OZw}}@w+m!f@ckvvZ!oO8j?Ez9AIk+n(UeCY&;>-Z4t*0D}=@REGQwd(K~fx zY$#}3HZKOseDRm2f@zI56D8s((3lw58pPkuQcsf()K=VYo264vBm&$xhR5v}t0Lid z#Z?#j{@0$kDhb4oT@WI8AZrpLgQtGY@NI79TtNKo^G0)-E|CJPoAkcK6Hxu;&Ty7> z;Ml^-H@@9nOSr$uJ>GXol0-gBG_Fx~Dy6NFHCKiaBZo$W zp#r)$#~}}OCJ;!In>I|`>c*sZ%ggXq(UxefkAwdG{`OI_n_@M`__SSz?;#$nucM`f zC4a-D9?B<~mnXO{va7`x*G4nu7DTT z_ny~WywB1AytPZxBnUPmAT8;81XcieTAG#|_Kf){@UJk_>uk+AgsJu!72C&5c=~Y{ z93nwhWsvFHa=^4A+7P9rw131{>W2=30g7=qhV6X=E#>x3@aADO(_fNXQiUp~wvKV} z_9xKlM8-hXPGIhA*kgFR33+`?pB|z`$)3lJ+qFb2Da(o{$rPI@G&zZ0KuJ!W&4Xxl zckYFN9qZGYB&g8zRGJLV^q2coRCz?#YVqXn*#WZN5?=kXq1aK7LjSYbr%@D1>l|da zNcnUZ1~KMRp{iSN1BtT87`vT05;pH$6Pl`evKWMYF~d(EmuB< zM46iFRHb~?=;%$6XsVySmb2-u23bsIz3{ zX^Xo(-ku;l1gvA%?<p^2{^lCcvHWQ<%t)yeqmp)x@5K@&Bb*^0_|(B5 z+{wUOQSm%c@BRS|DGDE}pHl36=}#3-B7k0|*6y6=7qBshIUir@>Fk_#qn0((h0$&= zL=g!EN@C%5qz{I%`^K-~l+~>fz5@1yOX*j3o|7&tct41E?NeC7Q@_l0kJWIw2DZS% z)da+^UP@|x%0n?Yn7@6~dwKvakXDO1J_BRe=KSLwu#Y2=qn5lmOoe3?st~gRyKFY( zL)MX}(GdJ$)XFWHoyCj4Ms`OE2w2+zrgX3z3)RN)6N2=ZL8I5T)n7EP`pMtkjTkjeoJi!CEABDA+9y zH#KmIJj)nB>oo7q(@qqEk?ZsneK$6}g(UxO(!hQ9i)<~3`laU6XC7ru(A$L^u0F>6 zkAg}IMRs4rZUG+;kv*BATy6fme(i1eNQeJl{;n`nC{&1Okgz(~jkfRPVeL@Fs1r_^ zr;kOUu9Q^Cf--9muBMPOGcu$e_io=m=EfIHMTY1D8-c&&&6O6sn-h((`&h2f&3KW< zR)fchT^0RqH)Ai~G(k*Q1JdkFbZ`wnH+#;ZIFEqB&6;lVWlreIv>07{z zwK6;-mmu=YB8E=@o_m3G@9|guI-PHCK4ezTGh!>^uZiHQ=Z*qXKq%pudyVL9!ZL$J zM?T(efEUb{=C^NNZVgd3e0aK9w7&TY${1y)A!gD20b-{~m6@Gtt~gMoAIE``p#yb+ zLGv%;Re>CC4#8ZK-ty*ZTv#d;35OI|KhiP4<4WH;>s@1H^`@2UMPLH!Gd+VR^Kbq` zzKzhn8`IGT>g?^F$F0s6#~BN5{qsyC2D1(HPh42S$KDIvS{3BWC}1S}XuOn1}r%7Xd;06y&ayofXHq7J5{=mSdSF3>VXS6?O9 zA(;*hcq;0cD`&C8zV$E)6ZZ4IirWEImwsoRjb;<%R<4f5@@4GF47E%WX;}V12H}sH zbI`U}%zH`kVop;@r}d3Zcx&dHIX#6(#u~^JmU#XQD(sSEJ*2D6!WsdXnZte_ z*6p>_-A)L zhaAa)z~Ln9;@F-pLxZQ$3E^yTA8q$C&3^^(+;iS>?;#|Zr@g!f_vn4I>#otV-p8>K z2Ay2y`3w&`ZU6+XioD^eQ6dZxomt?BwEQ~j)2mqDB0QdSS1_S^F1GO#I(A^EV|xIQ z4Jw2%JU*YUME58iUet;o2oh&~(FHhXrVQdKV>Pz7+wWkrWMxT^qeGXxG*^Nx8QH4I z5YkG30Csl{Cs8T>3Hsc$#3EK0g9uUCy1W`ZU7c0aol1#y)8Y#Q0hT9L1t9L68%O@L zj}nEhf<@l)c6Yd%?>O7yDnV&Rlk{9oK<2$5O2c!xLuoZ5dFGd{l`@vX1mX^MJR8pX zYNijIa@lHt_5?be-W{&JkdyLnT}Q}1b#k)Dl^!iBXG}zivK`p$uSad&=8xt}d_Czy zg*t&yy|O5YKuXLw^j1#6V4w>38AA+f&-JKmwAPRv9>NTLrx3p4F4t%$B^iQ_@}glg zC4T})AJ44JFsy$p%|bkz35@Y5Rg!XNtq2$!pXjOO%#UE5J$fUJnw78gx>d@cR_C8~kh_0$|2sR5j(3+3oq z^hdj!b6z*2m6h7<`TCgdAX#TedC%E&McTvopuh!I^?f1ebSigkcd_<+t}Umd_b!H1 z%;pc0YpcT=TTTe|^?^`C)yyLa?9jd8Q+lL*Z-WSDq54BN)}`kLU?uaQ055DjuT%^kgArp5Oe* zd}6P_YG7C5^SalfQQ9Ozu+X~gueGjHDt(sU%EkvD`5ym*9d$ET9+)YV<;-%1?Q6aa za_)l#kSp)qix8IaG+H0H$o;u==)R2#t{A+lD}pJaL;H-RmwXg72jA0wczMJ|^Hb<5 zQ5>?Z4%cQ^$jy*Wmc!k8k4D7U4X=%R)$l8e1fXGNm-i~1&e`pY$Nd@5HAcTlTxB>H 
zaR6t`57Ll8QV-4#kC}GF1bzfmHHEGAmR8iYj05xEN5A)IlPPri9uwyIh!APFRFXni ze)SEWXw@?*$`&b!&T#LPz>ckhN05dIW{~0#VFr#D3~}v4Tj58hRyGc#IbcaRuqzHM z%<=O42Hn7P)OXv92BAr{v=+o!h=;;jWfrk1araq)c*r{x2(D5fxqsOApP2-WklB2_ zwGZM=MU*>aD8a)q7jQ;zg-D(ddud`CEkPRou#q8|b&TRbyTc8vB0{kXl+egz=Q{Ae zotVng44(Pbq|he#3rYOe&qoX3SfFTMqUAJBvM_LS<<$LRKA==@Eff4FEt()^%j53^ zA@zK%dEphE-9!;=&%HP2^y?22TGn6@?df*dz`o*NKJ9@13R~v2T(LmG{wt8LmGa6w z(vPlz>oszsndM?%qyNGITJgDP!%BKo&JSp}KdARVx<3qg4F8#NqbU9h8DNEvP-=O$ zUB@@JCxdnvOksYQ3jKbba>m__EOmg8{<`?Ql6m9pS%Clm)C75Zvpw|x`LoU`V(Z!DK`;mN19aG>A#W`Zguui@5H#J(siuL6$ zCu&?J#Wu10*ESJd4I1R^N1>jfxmmtzEo|3_l>on*$R@w3A0I5dbN6&%3mHtut_|(8 z#a5e3GmCKS`DIAXr$7W`+p;exFI|)%(RfoX1bpwYB8&wH7f);(BMY31nVH-LvPtaT z|Cr_f5ulM)H&;8Fg*+0za9h}%A&6LlWgDGS#n}(g#-Q7uXQfoCeoeHUs*;#L*l$&0 z3TszjtDZ%BV`2_U4T}lv z6fG$8Bmoud?xt5Pda^hq2?yU=gRG{Yr_RxqPSL-FJyIO7ClMQrP#hz^e=J{!f+T*F!iP+Lre z6n;Rc5wU!i&mg0Cl@P3%{Uo@AM)u^6@VLyXtP;daA4gv)Rt|GcLdb6#C1Qxkqyz&6 zQ_V0N)RM6`3+3uBV;=qpYEGQ>f5fKzMU~Ga#F#%o@@XyiC5itDekev5i>ZH7n;_JT zeC1l0p%a+~B|eF^vb2k;KLDVGBKmxenhkw`7gP9d%qopt)1pZsTVy@rEzPy-s+ zD5|7z-=_$mxeD%xt77iF3$H`0PRVNdL8^JQ22K|m`SkFfJLqn#(JWh=^(!BtAps7X zB5l0+lYA|YPXI|(8w1mzU1mhcX2XF4&KI~N3JosEsGgDs_J?oC@s?o7MyznJ6JZAo zhW?6^L$ETZ&|IZc*D4|eMIDHhbR z;JM*dK4TCx-5x6xAE^U0hPgri*u|wvvFE-tT4Vkn(ql(?rkQr4eL~%6$(!A9*-0he zjd|bPEg>>z#{oe?1meb(5hNk_`9-mE5K|mNMwN=hq&P>>uq8%4y|i*HJ^!0ZYooH= zfE@cRR9AP1mxn*paZln}-JbiMRiTS5R)vi+Ym7XP@OeOpRXz)Qmcwl5;h7R^U^^%< z>D@r&AqQ~MV%4W9`BL(AVBqd^F{yQ#llMIeA}%Y>;hnxJZ&=Nd3L|@6`OXzcNJl%6 zs}k2$3-g5sjj1I=(Gc*~5;puhofFTMUife&-9E_XIyhc>-5-YG(tmOX%R9VPjKEg-VwB@cj)CBU7W3c@)|w^I8zpejl~ZMIwSmMbS5hEfrPS7@%w}Tx zT{gTMt-2L1NxX`5xuSkhP@BAX$m!4DFEf?32PEvVN$CZb=Z7KGt1#vrm2LaeK_oIX z2K{*wl?_klJN(E>QLv53w{fDEr78|JXdSZVS0Q58m!(eaLiVBi{tfTo6=CL;? zkM(B4*8oe+eaT*8`i+kjbd|E8sCTc^0ruvL9vvfRxo`>t4&uE?i7b%b&G5RoBA+-u z0E#a3XFl-Ui9+jUnxTa%Z-r!Yp1*X1LYGxk^($1 zIT;c0i7+P&KFhlK8b0W*sID*yH?dovnQ|z%r<`l^v~~Gb{o{cUG45wu&_~;|&LzTf zhZCiETfkaP9{BQv6L~mMH(1AujGNDc_&L3s58hqJ=6kp~=O3cgz>cbvE{|zZ9|^TY@z5@zJrUh4AM4@>ughYOSN6Be{pQ5_QnaJeS65_1!dbQi zK7?Cv#5OJ!?oS_>?b;QYh-shU;7Ev6*}=P9<-rM*eeaar(4L9pq_SzhbfzSlS>M7i zwWMzjl0&d`az53g-9^ZP9DUJVZ}kIr7ELyPA=5V;=!l=bzfn`4Xe^jj zi>=NgcYp*XMRw(A)w-&B!GHIwevMQZJb54{Z@3~w6H2e^T#D1K-$&u}JAO>;UgL1D z%KK+PmL1o=C4x&(+!Z9jMSM>bM6b*zp~Lh2MN6+zFbx(az2I`6F@KoSX@q5eEUU2} zP0p%|_K7t3l9mX=gH+8vbt1)KHjh{iPUe+iBe*BWQ!+w-mlqhi>iUHF0z!mEGR#@_ z?=CRjKzQMxv^V3$UNLP-GuMCp=_U$MA%z~b011)i)_4d)gDS~mtpu_8$l=Z7da@<< zsRKj|=pT`O{8a6YBxnQajFwiWOnR5C11kqG#XIQZr7B>z_~?dUqvS(ms6HbH>fH_F zfYrG3rkS{oQs++4wMA21-HmmbvYIG4Z|Nle76yE)fnaYJF! 
zD&arP~& zxCSXyOx4kSIL%O67v6s}YP*K6=Ocy9Pv7Aj2<LPX>h} z9KV-2PBHE(^=!W+5KfPY^hp^QEZ*fi8-(D2m03ogDoiozsP$NP!oAMoV^s7eVw-57 zNS{S3rpli;?_WH6`{>1_4ms;NPM*DRR3h^{(Cz2~(WP}C@G>C6#QfD6hF*2cJu1D` zMzfj7DZWRPD_OABFYZH|J-acmXa!q=M-F87uY1gn3FZ4iARvkgsmw)x?PaX4aQM$(>2%g`-zq-f8m z_*s=1V%Kb#Gl#cUbQT0jq?5d?Mv|BPb2R>$McLt06>@eM=q*0Iig`EzCim9H#MX_n_UZ%eHMXTY|{4X z{<{gh83~%)e0lv9Em|t4F0855IK@bkO0fZBY#f(7IeX=p@jFS8*L{MP3zjn((W}CM zenmq!7WU~6X8n3^eaPToU#G0F0;iR?&;7)ZBCPCo2&p-rP|rNmTTcT;CO)qZiNPsRDZ`A84NS*7Mo3-0JDp7_j$T!b8*ad!&j!&m@z<0JwR= z2*Z7sfdiL2ITLo`ypa%;IbA7WtrFq_kvkaLGFU~sF+s}BEEhASv=9&P_<&FcPhRA~ z|Govj3~BYh-HD)Gfa9at<--;Pu83#?Nj&Y01Zu{EWHDTRdU<|j=}(CLh9rDiq9yK} z;0JRLvSoy{_jv&{#eI}{MQQ9++41^V@oyv1ACk(!%%Y<(B+pVSMZOspPhf1#v8^!x z1UiPq_ovq12K}r>BT=!2OJx(Kc5HlJV5ZcygGn87O!c&*q2XClaHY@Fg7^PY;|g<@ zn;{~r<=mAj0*|V-mb20$LM|D5U32g|d9g7UT;QOe>H=k?I4}+v^a226sD|+5hR8U& z&}Ts4z;r^jm>otg-sbL#!St8e4R@SIUC|&@Ifz?>jEaYUm!y1}M;f58#oRFj3*eB6 zTx%dCbb@;4{4*F>pD7217R7*bCDC&ah-ASDb4=MuOU6h1td~q>4d3p-juO6ie~8_D z$9-yoclT%bl>FbpNPJmub;k@_x%!CpI4}qR;Df4r4NrGBIT{e!lLCa9M!ou=EQ);- z_8tMor(45K5(imi1&gFKOVKWi_Mb)C{Zd6udhG6sA44h37OiakoCwuZgV?n5W}=5R zv8l_o;*xq%Jrn?-PIw^!CD`dy^gW{rreg3i|m71BPvXKzxPk`}@(9fZF&R5l!;bK4d|ep+JBVxYTT-ToO6BZG`Z1p|Ams^x!0`?k%r!fK(jgZlGEi5$e^dUVG)IEN|2 zun;&lKv3~E2}9#FU5y%La=;ApvOS*BuGKS6FBvMJ;KgwqpzN zTGfBdKAx=fAJ-Ss?#uhW(Z&Ri*)jh0^d(}4p+c0K{__rU&ElShW%)=AGDbmc>U!=y z`h)5qNd#lycZp0$*^vw>%%vLUbzA#vCuEXf1MiCzRK~U7DY5+bx{<= z{^m`X6DA?D7Z|?07D@bb^GOVurNnLUxl@b9D?_4Pm0A+AdJum|iO)`DtqsIS3>mMO zo^+*g{)0feOis2kbOg72l(G*X-3Fj=s%QxkE;jL`LBboL{=oPb-{H%5!p5o*)BU{{ z)&5g?dImiS>rMj^2fXQ&Syh&4c(<3P)sp0Ex(wlmM)}rKn*HZ}92tsDNV70egBhe3 z6IYmoODRy5hfJJm2MTt)eRIiyO-2t>%s*6r-lEb|@J9A(LuKH1c7cDBSCylRA1db% zI~{#ClAn^bbD+DC82+tt;-$$hThuKx<`aNC}w7IjGE4Twg_Uw0V2|FwjBk+ zB!T}#WG6W_J;CJZHDZ|d?h_C15Yz|q_OzneNu2I?G0$9lmTF!y1*!DdS-&%U*l=S8 zA>*_MP8?}W5}D`2QlFjkDQp=o?3QNP?LcY5bcU@WMkTYB+?2ze#darK!lo6?`g$fC zl9_#mi(aiGGD*R#R^~GW@42O>^XgCgO?qfG>j&Ei@>!)R8U}(FcG&Kkc;26s<(s-6 z9fHL~Y?+vICdxS5gtGhq7p|o!#Vx|=n=C$=JT9voOSJL=){-EOr~PN19!dOuMA+@e zLXI&fo2v}5e?+C$L-Ou?5XkkhE0m^s`9+wE-N*Ekh*!Mb7_sp5l#tis-adnqGYf@Z z2sB@M&q6ALi&c8Zk7;D=H)tD!ZS~Ju&Nr1FHDUT=ZO84<)Us=XnDjjO-|#-5kMsD* zRC&?w@)pER&WWjpZ#QKoBrjB{lDvi$5;r}LEFM2*U_{w0y`Om2QsQu?9UOTcq%5r_ z<}0Mei>VZQPI0AdM>?fQbA`G0q(4f}E-J9T6~_Zgc7+nF2;f$U_6&|$KCQk;E+7Ci zl=!Ycr@-TjwZrz;cq|jJxztVvhcRf59e<&@>pn}2vUg)QT_4*U+Dqsh-x!({PHsn; z7`rH$u+OQd~ z`$|@<<$h-IxQm#DHkCRd(*7$RN{Vr2H3inf0I)a7R9>h z_*(si2~E%^?xEEC>w0dUm0!+=glogBVyTvRB>JLbyly=PsT&SQho#9nk(bZIIx7Mk zGf^5GKZv(8{lxBp%HC5hM`7yFjnQTRueskI}v#$3l)#Y@gRPmm1lrL_uEcbZc;sMV0!uuwOm2 z;Tdf-R9F;|tq~5G1CTEaNP5?)PJvyWGC`Kwj?xPtERh4e@!hc}JvG5|hrhF#vBEIN zg((knpkhUzWH(siTU7se>v|*&VO8>tBc98K;4y1=jmal}l_Ziq0OtV$OX8X@_a4W` z{9Ief?|kWPrT1=5@(#H>HSm1ZE6JN+tX57*=?w1Uh*<1BwlNU?yyl9?p{|dUZ@=Yw zwB@psA*kv5lLqjEYp%>d9=$P%Ou`4M6@q#-g&C2IJ(f<&DK0xPavN)S7XXi2@eGOP ztqPG^29y8K)0!HLhqlL8T56`esA_dMxV{OzkZbKHJ?-4^=`!<8QOWiqT@+qeJ~MsP z%cu7+vn4OL*-NsaPw^>vZ2DVHXKP%KC2w3?TqN%3il^_sMMQ&eWMR@=qq~VWx8y*7 zSvqfZ>ElnKp;1ePF{{<@W}n901$Zx4SHDTrI)`9{=Rjt2TbcmnV*wo^{~zZ(3{BAqRzuD> z@Y#k@r0_$Ul}SlsKu%t!EnErnxW~`zqM4wfy`NawJ(dc1{bt7;zuO2qL*(ps_%ZI( zTjuMh^&6%?ld=?^NKg{_RC3O2Rgq{2vlX{Xq15H2#e?`_mvP@iylDT&e zHzV@IDPKag<}F9Crc?ifs@bJU&S zasDA4{eM>dui)*!MQ2qx{!6d`rI)xx3LX!aH=rrVKb~RBiuLlsMFgCk zOr>N~xSxN$Jmq_tj!78~1L0>R%xB~_j-y>Ha2oY?1s3@&AJbh+#oNK{A^HC86J(>) zPZxedqpM?<*Rz}|k1!R{@`dQM!x=WOoSjRYDV&9!TTC1jB{S1vURp~twYc_t^rC?U z&VyS1EGc>!Pxo_ZNKgFu<~||r;Z~f5(s*eo-GQu|P$n8b4u!#;q}4MKS(0rJiJIAF z-3@iXBN3m0_T;3!%*d$o=fbKz>B