diff --git a/docs/notebooks/microsoft_agent_framework/microsoft_agent_framework_llama_stack_integration.ipynb b/docs/notebooks/microsoft_agent_framework/microsoft_agent_framework_llama_stack_integration.ipynb new file mode 100644 index 000000000..6bd5f4507 --- /dev/null +++ b/docs/notebooks/microsoft_agent_framework/microsoft_agent_framework_llama_stack_integration.ipynb @@ -0,0 +1,1024 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Microsoft Agent Framework + Llama Stack Integration\n", + "\n", + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/microsoft_agent_framework/microsoft_agent_framework_llama_stack_integration.ipynb)\n", + "\n", + "## Overview\n", + "\n", + "This notebook demonstrates how to use **Microsoft Agent Framework** (successor to AutoGen) with **Llama Stack** as the backend.\n", + "\n", + "> **Note:** This notebook uses Microsoft Agent Framework, which replaces AutoGen. For the migration guide, see: [Microsoft Agent Framework Migration Guide](https://learn.microsoft.com/en-us/agent-framework/migration-guide/from-autogen/)\n", + "\n", + "### Use Cases Covered:\n", + "1. **Simple ChatAgent** - Single agent task execution\n", + "2. **Sequential Workflow** - Round-robin multi-agent collaboration\n", + "3. **AgentThread** - Stateful multi-turn conversations\n", + "4. **Custom Workflow** - Data-flow with executors and feedback loops\n", + "5. 
**Concurrent Workflow** - Parallel agent processing\n", + "\n", + "---\n", + "\n", + "## Prerequisites\n", + "\n", + "```bash\n", + "# Install Microsoft Agent Framework\n", + "pip install agent-framework\n", + "\n", + "# Llama Stack should already be running\n", + "# Default: http://localhost:8321\n", + "```\n", + "\n", + "**Migration Note:** If you're migrating from AutoGen, the main changes are:\n", + "- Package: `autogen-*` → `agent-framework`\n", + "- Client: `OpenAIChatCompletionClient` → `OpenAIResponsesClient` or `OpenAIChatClient`\n", + "- Client: `AzureOpenAIChatCompletionClient` → `AzureOpenAIResponsesClient` or `AzureOpenAIChatClient`\n", + "- Agent: `AssistantAgent` → `ChatAgent`\n", + "- Team: `RoundRobinGroupChat` → `SequentialBuilder`" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Microsoft Agent Framework imports successful\n", + "Using Microsoft Agent Framework (successor to AutoGen)\n", + "✅ Llama Stack is running at http://localhost:8321\n", + "Status: 200\n" + ] + } + ], + "source": [ + "# Imports for Microsoft Agent Framework\n", + "import os\n", + "import asyncio\n", + "from agent_framework import ChatAgent\n", + "from agent_framework.openai import OpenAIResponsesClient\n", + "\n", + "print(\"✅ Microsoft Agent Framework imports successful\")\n", + "print(\"Using Microsoft Agent Framework (successor to AutoGen)\")\n", + "\n", + "# Check Llama Stack connectivity\n", + "import httpx\n", + "\n", + "LLAMA_STACK_URL = \"http://localhost:8321\"\n", + "\n", + "try:\n", + " response = httpx.get(f\"{LLAMA_STACK_URL}/v1/models\")\n", + " print(f\"✅ Llama Stack is running at {LLAMA_STACK_URL}\")\n", + " print(f\"Status: {response.status_code}\")\n", + "except Exception as e:\n", + " print(f\"❌ Llama Stack not accessible: {e}\")\n", + " print(\"Make sure Llama Stack is running on port 8321\")" + ] + }, + { + "cell_type": 
"markdown", + "metadata": {}, + "source": [ + "## Configuration: Microsoft Agent Framework with Llama Stack\n", + "\n", + "### How It Works\n", + "\n", + "Microsoft Agent Framework uses **OpenAIResponsesClient** to connect to OpenAI-compatible servers like Llama Stack.\n", + "\n", + "**Key Changes from AutoGen:**\n", + "- `OpenAIChatCompletionClient` → `OpenAIResponsesClient`\n", + "- Team-based architecture (similar to AutoGen v0.7.5)\n", + "- Async/await pattern for running tasks" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Model client configured for Llama Stack\n", + "Model: ollama/llama3.3:70b\n", + "Base URL: http://localhost:8321/v1\n", + "Client type: OpenAIResponsesClient\n" + ] + } + ], + "source": [ + "# Create an OpenAI Responses client for Llama Stack\n", + "# Uses the OpenAI-compatible Responses API (/v1/responses)\n", + "chat_client = OpenAIResponsesClient(\n", + "    model_id=\"ollama/llama3.3:70b\",  # Replace with any model available on your Llama Stack server\n", + "    api_key=\"not-needed\",\n", + "    base_url=\"http://localhost:8321/v1\"  # Llama Stack OpenAI-compatible endpoint\n", + ")\n", + "\n", + "print(\"✅ Model client configured for Llama Stack\")\n", + "print(\"Model: ollama/llama3.3:70b\")\n", + "print(\"Base URL: http://localhost:8321/v1\")\n", + "print(\"Client type: OpenAIResponsesClient\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 1: Simple Task with ChatAgent\n", + "\n", + "### Pattern: Single Agent Task\n", + "\n", + "Microsoft Agent Framework uses **ChatAgent** to create AI assistants powered by your model.\n", + "\n", + "**ChatAgent Features:**\n", + "- Multi-turn by default (keeps calling tools until complete)\n", + "- Stateless (use `AgentThread` for conversation history)\n", + "- Configured with `instructions` (replaces AutoGen's `system_message`)\n", + "- Can be created directly or via client 
factory method\n", + "\n", + "### Use Case: Solve a Math Problem" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Agent created: MathAssistant\n", + "\n", + "==================================================\n", + "Task Result:\n", + "To find the sum of the first 10 prime numbers, we need to follow these steps:\n", + "\n", + "**Step 1: Identify the first 10 prime numbers**\n", + "\n", + "A prime number is a positive integer that is divisible only by itself and 1. We will list out the first few prime numbers until we have 10:\n", + "2, 3, 5, 7, 11, 13, 17, 19, 23, 29\n", + "\n", + "These are the first 10 prime numbers.\n", + "\n", + "**Step 2: Add up the prime numbers**\n", + "\n", + "Now, we simply need to add these numbers together:\n", + "2 + 3 = 5\n", + "5 + 5 = 10\n", + "10 + 7 = 17\n", + "17 + 11 = 28\n", + "28 + 13 = 41\n", + "41 + 17 = 58\n", + "58 + 19 = 77\n", + "77 + 23 = 100\n", + "100 + 29 = 129\n", + "\n", + "Therefore, the sum of the first 10 prime numbers is **129**.\n", + "\n", + "So, to summarize:\n", + "The first 10 prime numbers are: 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29.\n", + "Their sum is: 2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 = **129**.\n", + "==================================================\n" + ] + } + ], + "source": [ + "# Create the agent directly (alternatively, build it via the client's factory method)\n", + "assistant = ChatAgent(\n", + "    name=\"MathAssistant\",\n", + "    chat_client=chat_client,\n", + "    instructions=\"You are a helpful AI assistant that solves math problems. Provide clear explanations and show your work.\"\n", + ")\n", + "\n", + "print(\"✅ Agent created:\", assistant.name)\n", + "\n", + "# Define the task\n", + "task = \"What is the sum of the first 10 prime numbers? 
Please calculate it step by step.\"\n", + "\n", + "# Run the task (Agent Framework uses async)\n", + "# Note: ChatAgent is stateless - no conversation history between calls\n", + "result = await assistant.run(task)\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"Task Result:\")\n", + "print(result.text if result.text else \"No response\")\n", + "print(\"=\"*50)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 2: Multi-Agent Team Collaboration\n", + "\n", + "### Pattern: Sequential Workflow (Round-Robin Style)\n", + "\n", + "Agent Framework uses **SequentialBuilder** to create workflows where agents take turns.\n", + "\n", + "**Key Concepts:**\n", + "- `SequentialBuilder`: Agents process messages sequentially\n", + "- Shared conversation history across all agents\n", + "- Each agent sees all previous messages\n", + "\n", + "### Use Case: Write a Technical Blog Post" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Team agents created: Researcher, Writer, Critic\n", + "\n", + "==================================================\n", + "Running Sequential Workflow:\n", + "==================================================\n", + "\n", + "Turn 1 [user]:\n", + "Write a 200-word blog post about the benefits of using Llama Stack for LLM applications.\n", + "\n", + "Topic: Benefits of Llama Stack for LLM applications\n", + "Target length: 200 words\n", + "Audience: Developers and technical decision-makers\n", + "\n", + "\n", + "Turn 2 [assistant - Researcher]:\n", + "* Llama Stack is an open-source framework for building LLM applications\n", + "* Improves model performance with optimized algorithms and hyperparameters\n", + "* Supports multiple frameworks, including PyTorch and TensorFlow\n", + "* Reduces development time by up to 30% with pre-built components\n", + "* Enhances scalability with 
distributed training capabilities\n", + "* Provides seamless integration with popular libraries and tools\n", + "\n", + "Research complete. Passing to Writer.\n", + "\n", + "Turn 3 [assistant - Writer]:\n", + " \n", + "\n", + "### Introduction to Llama Stack \n", + "The Llama Stack is designed to simplify the development of LLM applications, providing numerous benefits for developers and technical decision-makers.\n", + "\n", + "### Benefits of Using Llama Stack\n", + "Using Llama Stack offers several advantages, including improved model performance, reduced development time, and enhanced scalability. With its optimized algorithms and hyperparameters, Llama Stack enables developers to build high-performing LLM models. The framework's pre-built components and support for multiple frameworks reduce development time, allowing developers to focus on other aspects of their project.\n", + "\n", + "By leveraging Llama Stack, developers can streamline their workflow, improve model accuracy, and deploy LLM applications more efficiently. Whether building chatbots, language translators, or text generators, Llama Stack provides a robust foundation for developing innovative LLM applications.\n", + "Draft complete. Passing to Critic.\n", + "\n", + "Turn 4 [assistant - Critic]:\n", + " \n", + "\n", + "**Review:**\n", + "1. The blog post jumps abruptly from introducing Llama Stack to listing its benefits without providing a clear connection between the two sections. Consider adding a transitional sentence or phrase to guide the reader more smoothly through the text.\n", + "2. While the benefits of using Llama Stack are mentioned, such as improved model performance and reduced development time, specific examples or case studies that illustrate these benefits would make the content more engaging and concrete for the audience.\n", + "3. The tone of the blog post is quite formal, which may suit technical decision-makers but could be more approachable for a broader developer audience. 
Incorporating more conversational language or anecdotes about the challenges of LLM development and how Llama Stack addresses them might enhance readability and appeal.\n" + ] + } + ], + "source": [ + "from agent_framework import SequentialBuilder, WorkflowOutputEvent\n", + "\n", + "# Create specialist agents with very strict role separation\n", + "researcher = ChatAgent(\n", + " name=\"Researcher\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are a researcher. Your ONLY job is to gather facts, statistics, and key information.\n", + " \n", + " DO:\n", + " - Provide bullet points of facts and key information\n", + " - Include relevant statistics if available\n", + " - Keep it concise (50-100 words max)\n", + " \n", + " DO NOT:\n", + " - Write full paragraphs or blog posts\n", + " - Act as a writer or editor\n", + " - Provide any writing beyond factual bullet points\n", + " \n", + " End your response by saying: \"Research complete. Passing to Writer.\"\n", + " \"\"\"\n", + ")\n", + "\n", + "writer = ChatAgent(\n", + " name=\"Writer\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are a technical writer. Your ONLY job is to take research and write a blog post.\n", + " \n", + " DO:\n", + " - Use the research provided by the Researcher\n", + " - Write a clear, engaging 200-word blog post\n", + " - Use proper formatting (headers, paragraphs)\n", + " - Focus on benefits and value\n", + " \n", + " DO NOT:\n", + " - Do research yourself\n", + " - Review or critique your own work\n", + " - Act as an editor or critic\n", + " \n", + " End your response by saying: \"Draft complete. Passing to Critic.\"\n", + " \"\"\"\n", + ")\n", + "\n", + "critic = ChatAgent(\n", + " name=\"Critic\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are an editor and critic. 
Your ONLY job is to review the blog post written by the Writer.\n", + " \n", + " DO:\n", + " - Review the blog post for clarity, accuracy, and engagement\n", + " - Provide 3-5 specific, constructive suggestions for improvement\n", + " - Comment on structure, tone, and effectiveness\n", + " - Be constructive but honest\n", + " \n", + " DO NOT:\n", + " - Rewrite the blog post yourself\n", + " - Do research or writing\n", + " - Say \"looks good\" without providing specific feedback\n", + " \n", + " Provide your review in this format:\n", + " **Review:**\n", + " 1. [Suggestion 1]\n", + " 2. [Suggestion 2]\n", + " 3. [Suggestion 3]\n", + " \"\"\"\n", + ")\n", + "\n", + "print(\"✅ Team agents created: Researcher, Writer, Critic\")\n", + "\n", + "# Create a sequential workflow (round-robin collaboration)\n", + "workflow = SequentialBuilder().participants([researcher, writer, critic]).build()\n", + "\n", + "# Simpler task that doesn't list all the steps (to avoid confusion)\n", + "task = \"\"\"Write a 200-word blog post about the benefits of using Llama Stack for LLM applications.\n", + "\n", + "Topic: Benefits of Llama Stack for LLM applications\n", + "Target length: 200 words\n", + "Audience: Developers and technical decision-makers\n", + "\"\"\"\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"Running Sequential Workflow:\")\n", + "print(\"=\"*50)\n", + "\n", + "# Run the workflow and display results with agent names\n", + "async for event in workflow.run_stream(task):\n", + " if isinstance(event, WorkflowOutputEvent):\n", + " conversation_history = event.data\n", + " \n", + " # Map assistant messages to agent names using position\n", + " agent_names = [\"Researcher\", \"Writer\", \"Critic\"]\n", + " turn = 1\n", + " assistant_count = 0\n", + " \n", + " for msg in conversation_history:\n", + " # Normalize role to string for comparison (msg.role is a Role enum)\n", + " role_str = str(msg.role).lower().strip()\n", + " \n", + " # Determine the speaker label\n", + " 
if role_str == \"user\":\n", + " speaker = \"user\"\n", + " elif role_str == \"assistant\":\n", + " if assistant_count < len(agent_names):\n", + " speaker = f\"assistant - {agent_names[assistant_count]}\"\n", + " else:\n", + " speaker = \"assistant\"\n", + " assistant_count += 1\n", + " else:\n", + " speaker = str(msg.role)\n", + " \n", + " # Display the message\n", + " print(f\"\\nTurn {turn} [{speaker}]:\")\n", + " print(msg.text[:1000] + \"...\" if len(msg.text or \"\") > 1000 else msg.text)\n", + " turn += 1\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 3: Multi-Turn Conversations with AgentThread\n", + "\n", + "### Pattern: Stateful Conversations\n", + "\n", + "Unlike AutoGen, `ChatAgent` is **stateless by default**. To maintain conversation history across multiple interactions, use **AgentThread**.\n", + "\n", + "**AgentThread Features:**\n", + "- Stores conversation history\n", + "- Allows context to carry across multiple `agent.run()` calls\n", + "\n", + "### Use Case: Interactive Analysis" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Analyst agent created\n", + "\n", + "==================================================\n", + "Multi-Turn Conversation with Thread:\n", + "==================================================\n", + "\n", + "[Turn 1 - Initial Analysis]:\n", + "**Introduction to Local LLMs vs Cloud-Based APIs**\n", + "\n", + "The rise of Large Language Models (LLMs) has revolutionized natural language processing, offering unparalleled capabilities in text generation, comprehension, and analysis. Users can access these powerful models via two primary avenues: local deployment or cloud-based Application Programming Interfaces (APIs). 
Each approach presents distinct adva...\n", + "\n", + "[Turn 2 - Follow-up on Cost]:\n", + "**Cost Implications: Local LLMs vs Cloud-Based APIs**\n", + "\n", + "When evaluating the cost implications of local LLMs versus cloud-based APIs, several factors come into play. These include initial investment, ongoing expenses, scalability costs, and potential savings. Each approach has distinct cost characteristics that can significantly impact an organization's budget and ROI (Return on Investment).\n", + "\n", + "### **...\n", + "\n", + "[Turn 3 - Summary]:\n", + "I recommend choosing between local LLM deployment and cloud-based APIs based on a careful consideration of factors such as data sensitivity, scalability needs, budget constraints, and the importance of customization and control, with local deployment suitable for high-security applications and cloud-based APIs ideal for scalable, cost-efficient solutions with lower security demands.\n", + "\n", + "==================================================\n", + "Thread maintained context across 3 turns\n", + "==================================================\n" + ] + } + ], + "source": [ + "# Create an analyst agent\n", + "analyst = ChatAgent(\n", + " name=\"TechAnalyst\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are a technical analyst. Analyze technical topics deeply:\n", + " 1. Break down complex concepts\n", + " 2. Identify pros and cons\n", + " 3. 
Provide recommendations\n", + "    \"\"\"\n", + ")\n", + "\n", + "print(\"✅ Analyst agent created\")\n", + "\n", + "# Create a new thread to maintain conversation state\n", + "thread = analyst.get_new_thread()\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"Multi-Turn Conversation with Thread:\")\n", + "print(\"=\"*50)\n", + "\n", + "# First interaction\n", + "result1 = await analyst.run(\n", + "    \"Analyze the trade-offs between using local LLMs versus cloud-based APIs.\",\n", + "    thread=thread\n", + ")\n", + "print(\"\\n[Turn 1 - Initial Analysis]:\")\n", + "print(result1.text[:400] + \"...\" if len(result1.text or \"\") > 400 else result1.text)\n", + "\n", + "# Second interaction - builds on previous context\n", + "result2 = await analyst.run(\n", + "    \"What about cost implications specifically?\",\n", + "    thread=thread\n", + ")\n", + "print(\"\\n[Turn 2 - Follow-up on Cost]:\")\n", + "print(result2.text[:400] + \"...\" if len(result2.text or \"\") > 400 else result2.text)\n", + "\n", + "# Third interaction - continues the conversation\n", + "result3 = await analyst.run(\n", + "    \"Summarize your recommendation in one sentence.\",\n", + "    thread=thread\n", + ")\n", + "print(\"\\n[Turn 3 - Summary]:\")\n", + "print(result3.text)\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"Thread maintained context across 3 turns\")\n", + "print(\"=\"*50)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 4: Advanced Workflow with Custom Executors\n", + "\n", + "### Pattern: Data-Flow Workflow with Code Review Loop\n", + "\n", + "Agent Framework's **Workflow** enables complex orchestration using executors and edges.\n", + "Unlike AutoGen's event-driven model, workflows use a **data-flow** architecture.\n", + "\n", + "**Key Concepts:**\n", + "- `Executor`: Processing units (agents, functions, or sub-workflows)\n", + "- `WorkflowBuilder`: Build typed data-flow graphs\n", + "- `@executor` decorator: Define custom processing 
logic\n", + "- Edges route messages between executors\n", + "\n", + "### Use Case: Iterative Code Review Until Approved" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Code review team created with strict instructions\n", + "\n", + "============================================================\n", + "Code Review Workflow (with iteration tracking):\n", + "============================================================\n", + "\n", + " [Developer - Iteration 1]\n", + " Code submitted for review (preview): ```python\n", + "import re\n", + "\n", + "def validate_email(email: str) -> bool:\n", + " \"\"\"\n", + " Validat...\n", + "\n", + " [Reviewer - Iteration 1]\n", + " Decision: ❌ NEEDS REVISION\n", + " Sending feedback to developer...\n", + "\n", + " [Developer - Iteration 2]\n", + " Code submitted for review (preview): ```python\n", + "import re\n", + "\n", + "# Define a regular expression pattern for email validation ...\n", + "\n", + " [Reviewer - Iteration 2]\n", + " Decision: ❌ NEEDS REVISION\n", + " Sending feedback to developer...\n", + "\n", + " [Developer - Iteration 3]\n", + " Code submitted for review (preview): ```python\n", + "import re\n", + "\n", + "# Define a regular expression pattern for email validation ...\n", + "\n", + " [Reviewer - Iteration 3]\n", + " Decision: ❌ NEEDS REVISION\n", + " Sending feedback to developer...\n", + "\n", + " [Developer - Iteration 4]\n", + " Code submitted for review (preview): ```python\n", + "import re\n", + "import logging\n", + "\n", + "# Define constants for email validation\n", + "EMAI...\n", + "\n", + " [Reviewer - Iteration 4]\n", + " Decision: ❌ NEEDS REVISION\n", + "\n", + "============================================================\n", + "FINAL RESULT:\n", + "============================================================\n", + "⚠️ MAX ITERATIONS REACHED (4)\n", + "\n", + "📝 Last 
Review:\n", + "NEEDS REVISION: \n", + "The code provided has several areas that need improvement. Here are the specific issues:\n", + "\n", + "1. **Input Validation**: The `validate_input` function checks if the input email is a string and raises a TypeError if not. However, in the example usage, there's an attempt to call `validate_email(None)`. When this happens, the code does indeed raise an error, but it would be more Pythonic to explicitly check for `None` before calling `validate_input`, since `isinstance(None, str)` is False.\n", + "\n", + "2. **Internationalized Domain Name (IDN) Handling**: The code logs a warning when an IDN is detected but doesn't further validate the domain name according to IDN rules. It would be more robust to actually check if the domain can be encoded in ASCII using Punycode for validation purposes, rather than just logging a warning.\n", + "\n", + "3. **Error Handling in `validate_email_format`**: This function catches `re.error` exceptions but then immediately raises them again. If the intention is to log the error before re-raising it, this code achieves that. However, if not handling these errors further (e.g., providing a default return value or additional processing), consider removing the try-except block since it does not add any functionality beyond what's already built into Python.\n", + "\n", + "4. **Logging**: The code uses `logging.error` and `logging.warning` to log various events but does not configure logging anywhere in the provided snippet. Ensure that logging is configured appropriately at the application level, including setting a logging level and handlers for where log messages should be sent (e.g., console, file).\n", + "\n", + "5. **Type Hints for Exception Returns**: While type hints are used for function arguments and return types, consider adding them for raised exceptions as well to make the code's behavior more explicit.\n", + "\n", + "6. 
**Redundant Pattern Matching**: The `validate_email_format` function first checks if an email is non-ASCII before attempting a pattern match with `EMAIL_PATTERN`. However, since `EMAIL_PATTERN` only matches ASCII characters (due to its structure), the check for ASCII can be omitted as it's implicitly required by the regular expression. \n", + "\n", + "7. **Potential Regular Expression Denial of Service (ReDoS)**: Regular expressions are vulnerable to ReDoS when certain patterns cause catastrophic backtracking, leading to exponential time complexity in matching strings. While `EMAIL_PATTERN` seems safe, consider using more efficient validation methods or libraries specifically designed for email address validation if possible.\n", + "\n", + "8. **Comment on Edge Cases**: The code attempts to validate common email formats but notes it might not cover all edge cases according to the official standard (RFC 5322). It would be beneficial to document which specific edge cases this implementation does not handle to set expectations for its usage and limitations. \n", + "\n", + "9. **Testing Coverage**: Although there's an example usage with a range of test emails, consider using a unit testing framework like `unittest` to write and run structured tests against the validation functions. This approach ensures more comprehensive coverage of potential inputs and makes it easier to identify regressions if the code is modified in the future.\n", + "\n", + "10. **Consider Using Existing Libraries**: Email address validation can be complex, especially when considering international domain names and all possible valid formats according to RFCs. 
Consider using an existing library designed specifically for this purpose, like `email-validator`, which might offer more comprehensive and reliable validation capabilities than a custom implementation.\n", + "\n", + "💻 Last Code:\n", + "```python\n", + "import re\n", + "import logging\n", + "\n", + "# Define constants for email validation\n", + "EMAIL_PATTERN = r\"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$\"\n", + "MAX_EMAIL_LENGTH = 254 # Maximum allowed length of an email address\n", + "IDN_PATTERN = r\"^xn--.*$\" # Pattern to match internationalized domain names (IDNs)\n", + "\n", + "def validate_input(email: str) -> None:\n", + " \"\"\"\n", + " Validate input type and emptiness.\n", + "\n", + " Args:\n", + " email (str): The email address to validate.\n", + "\n", + " Raises:\n", + " TypeError: If the input email is not a string.\n", + " ValueError: If the input email is empty or exceeds the maximum allowed length.\n", + " \"\"\"\n", + "\n", + " # Check if email is a string\n", + " if not isinstance(email, str):\n", + " raise TypeError(\"Input must be a string\")\n", + "\n", + " # Trim leading/trailing whitespace from the input email\n", + " email = email.strip()\n", + "\n", + " # Check if email is empty after trimming or exceeds the maximum length\n", + " if not email or len(email) > MAX_EMAIL_LENGTH:\n", + " raise ValueError(f\"Email '{email}' cannot be empty and must not exceed {MAX_EMAIL_LENGTH} characters\")\n", + "\n", + "def validate_email_format(email: str) -> bool:\n", + " \"\"\"\n", + " Validate the format of an email address.\n", + "\n", + " An email address is considered valid if it matches the standard format of local-part@domain.\n", + " The local-part may contain letters (a-z, A-Z), numbers (0-9), and special characters (. 
_ % + -).\n", + " The domain must be at least two parts separated by a dot, with each part containing letters (a-z, A-Z), numbers (0-9), and hyphens (-).\n", + "\n", + " Note: This function does not check if the email address actually exists or if the domain is valid according to the official standard (RFC 5322).\n", + " It implements a simplified version of the standard that supports common email address formats but may not cover all edge cases.\n", + "\n", + " Args:\n", + " email (str): The email address to validate.\n", + "\n", + " Returns:\n", + " bool: True if the email format is valid, False otherwise.\n", + " \"\"\"\n", + "\n", + " # Check if the email contains non-ASCII characters\n", + " if not email.isascii():\n", + " logging.error(\"Email contains non-ASCII characters\")\n", + " return False\n", + "\n", + " try:\n", + " # Check if the email matches the pattern\n", + " if re.match(EMAIL_PATTERN, email):\n", + " # Additional check for internationalized domain names (IDNs)\n", + " domain = email.split('@')[1]\n", + " if IDN_PATTERN.match(domain):\n", + " logging.warning(\"Internationalized domain name detected\")\n", + " return True\n", + " else:\n", + " return False\n", + " except re.error as e:\n", + " logging.error(f\"Error occurred during regular expression matching: {e}\")\n", + " raise\n", + "\n", + "def validate_email(email: str) -> bool:\n", + " \"\"\"\n", + " Validate an email address.\n", + "\n", + " Args:\n", + " email (str): The email address to validate.\n", + "\n", + " Returns:\n", + " bool: True if the email is valid, False otherwise.\n", + "\n", + " Raises:\n", + " TypeError: If the input email is not a string.\n", + " ValueError: If the input email is empty or exceeds the maximum allowed length.\n", + " \"\"\"\n", + "\n", + " validate_input(email)\n", + " return validate_email_format(email)\n", + "\n", + "# Example usage and edge case testing\n", + "if __name__ == \"__main__\":\n", + " emails = [\n", + " \"test@example.com\",\n", + " 
\"invalid_email\",\n", + " \"test@.com\",\n", + " \" test@example.com\",\n", + " \"test.example.com\",\n", + " None,\n", + " \"\",\n", + " \"verylongdomainname123456789012345678901234567890@example.com\",\n", + " \"special.chars(email)-test@example.co.uk\",\n", + " \"test@example.co.uk..uk\"\n", + " ]\n", + "\n", + " for email in emails:\n", + " try:\n", + " print(f\"Email: {email}, Valid: {validate_email(email)}\")\n", + " except (TypeError, ValueError) as e:\n", + " if email is None:\n", + " print(\"Email: None, Error: Input must be a string\")\n", + " else:\n", + " print(f\"Email: {email}, Error: {e}\")\n", + "```\n" + ] + } + ], + "source": [ + "from agent_framework import WorkflowBuilder, executor, WorkflowContext, WorkflowOutputEvent\n", + "from typing_extensions import Never\n", + "\n", + "# Create code review agents with better instructions\n", + "code_developer = ChatAgent(\n", + " name=\"Developer\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are a developer. When you receive code review feedback:\n", + " - Address ALL issues mentioned\n", + " - Explain your changes briefly\n", + " - Present ONLY the improved code (no extra commentary)\n", + "\n", + " If no feedback is given, present your initial implementation.\"\"\"\n", + ")\n", + "\n", + "code_reviewer = ChatAgent(\n", + " name=\"CodeReviewer\",\n", + " chat_client=chat_client,\n", + " instructions=\"\"\"You are a senior code reviewer. 
Review code for bugs, performance, security, and best practices.\n", + "\n", + " CRITICAL: Your response MUST start with one of these:\n", + "\n", + " If code is production-ready:\n", + " \"APPROVED: [brief reason why it's good]\"\n", + "\n", + " If code needs changes:\n", + " \"NEEDS REVISION: [list specific issues to fix]\"\n", + "\n", + " DO NOT provide fixed code examples.\n", + " DO NOT say LGTM or APPROVED unless the code is truly ready.\n", + " Be constructive but strict.\"\"\"\n", + ")\n", + "\n", + "print(\"✅ Code review team created with strict instructions\")\n", + "\n", + "# Track iterations\n", + "review_state = {\"iteration\": 0, \"max_iterations\": 4}\n", + "\n", + "# Define custom executors for workflow\n", + "@executor(id=\"developer\")\n", + "async def developer_executor(task: str, ctx: WorkflowContext[str]) -> None:\n", + " \"\"\"Developer creates or improves code based on input.\"\"\"\n", + " review_state[\"iteration\"] += 1\n", + " print(f\"\\n [Developer - Iteration {review_state['iteration']}]\")\n", + "\n", + " result = await code_developer.run(task)\n", + " print(f\" Code submitted for review (preview): {result.text[:80]}...\")\n", + " await ctx.send_message(result.text)\n", + "\n", + "@executor(id=\"reviewer\")\n", + "async def reviewer_executor(code: str, ctx: WorkflowContext[str, str]) -> None:\n", + " \"\"\"Reviewer checks code and either approves or requests changes.\"\"\"\n", + " print(f\"\\n [Reviewer - Iteration {review_state['iteration']}]\")\n", + "\n", + " result = await code_reviewer.run(f\"Review this code:\\n\\n{code}\")\n", + "\n", + " # Smart approval detection - check the START of the response\n", + " response_start = result.text[:100].upper() # First 100 chars only\n", + " is_approved = response_start.startswith(\"APPROVED\")\n", + " needs_revision = \"NEEDS REVISION\" in response_start\n", + "\n", + " print(f\" Decision: {'✅ APPROVED' if is_approved else '❌ NEEDS REVISION' if needs_revision else '⚠️ UNCLEAR'}\")\n", + 
"\n", + " if is_approved:\n", + " # Code approved! Output final result\n", + " await ctx.yield_output(\n", + " f\"✅ APPROVED after {review_state['iteration']} iteration(s)\\n\\n\"\n", + " f\"📝 Review Comments:\\n{result.text}\\n\\n\"\n", + " f\"💻 Final Code:\\n{code}\"\n", + " )\n", + " elif review_state[\"iteration\"] >= review_state[\"max_iterations\"]:\n", + " # Hit max iterations - force stop\n", + " await ctx.yield_output(\n", + " f\"⚠️ MAX ITERATIONS REACHED ({review_state['max_iterations']})\\n\\n\"\n", + " f\"📝 Last Review:\\n{result.text}\\n\\n\"\n", + " f\"💻 Last Code:\\n{code}\"\n", + " )\n", + " else:\n", + " # Send feedback back to developer for revision\n", + " print(\" Sending feedback to developer...\")\n", + " await ctx.send_message(\n", + " f\"FEEDBACK FROM REVIEWER:\\n{result.text}\\n\\nPrevious code:\\n{code}\",\n", + " target_id=\"developer\"\n", + " )\n", + "\n", + "# Build workflow: developer → reviewer (with feedback loop)\n", + "workflow = (\n", + " WorkflowBuilder()\n", + " .add_edge(developer_executor, reviewer_executor)\n", + " .add_edge(reviewer_executor, developer_executor) # Feedback loop\n", + " .set_start_executor(developer_executor)\n", + " .build()\n", + ")\n", + "\n", + "# Use a task that's more likely to need multiple iterations\n", + "task = \"\"\"Implement a Python function to validate email addresses with these requirements:\n", + "- Must have @ symbol\n", + "- Must have domain with at least one dot\n", + "- No spaces allowed\n", + "- Handle edge cases\n", + "- Include basic error handling\n", + "Keep it simple but correct.\"\"\"\n", + "\n", + "print(\"\\n\" + \"=\"*60)\n", + "print(\"Code Review Workflow (with iteration tracking):\")\n", + "print(\"=\"*60)\n", + "\n", + "# Reset state\n", + "review_state[\"iteration\"] = 0\n", + "\n", + "# Run workflow with streaming\n", + "async for event in workflow.run_stream(task):\n", + " if isinstance(event, WorkflowOutputEvent):\n", + " print(\"\\n\" + \"=\"*60)\n", + " print(\"FINAL 
RESULT:\")\n", + " print(\"=\"*60)\n", + " print(event.data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Example 5: Concurrent Workflow Pattern\n", + "\n", + "### Pattern: Parallel Processing\n", + "\n", + "Agent Framework's **ConcurrentBuilder** enables parallel agent execution.\n", + "All agents process the input simultaneously and results are aggregated.\n", + "\n", + "### Use Case: Multi-Perspective Analysis" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "✅ Analyst team created: Technical, Business, Security\n", + "\n", + "==================================================\n", + "Concurrent Analysis (Parallel Processing):\n", + "==================================================\n", + "\n", + "[Analysis 1]:\n", + "Evaluate the proposal to deploy a customer service chatbot.\n", + "--------------------------------------------------\n", + "\n", + "[Analysis 2]:\n", + "**Proposal Evaluation: Customer Service Chatbot Deployment**\n", + "\n", + "**Introduction:**\n", + "The proposed project aims to deploy a customer service chatbot to enhance the user experience, reduce support queries, and increase efficiency. This evaluation assesses the technical feasibility and implementation complexity of the proposal.\n", + "\n", + "**Technical Feasibility:**\n", + "\n", + "1. **Platform Compatibility:** The chatbot can be integrated with various platforms, including websites, mobile apps, and social media messaging services.\n", + "2. **Natural Language Processing (NLP):** The proposed NLP engine is capable of understanding and processing human language, allowing for effective conversation flow.\n", + "3. **Integration with Existing Systems:** The chatbot can be integrated with the company's CRM system to access customer data and provide personalized support.\n", + "4. 
**Security:** The proposed solution includes robust security measures, such as encryption and secure authentication protocols, to ensure data protection.\n", + "\n", + "**Technic...\n", + "--------------------------------------------------\n", + "\n", + "[Analysis 3]:\n", + "**Proposal Evaluation: Deploying a Customer Service Chatbot**\n", + "\n", + "**Introduction**\n", + "\n", + "The proposal aims to deploy a customer service chatbot to enhance customer experience, reduce support queries, and improve operational efficiency. This evaluation will assess the business value, ROI, and market impact of the proposed solution.\n", + "\n", + "**Business Case**\n", + "\n", + "1. **Problem Statement**: The current customer support process is manual, time-consuming, and leads to long wait times, resulting in decreased customer satisfaction.\n", + "2. **Solution Overview**: Implement a conversational AI-powered chatbot that provides 24/7 support, automates routine inquiries, and directs complex issues to human agents.\n", + "3. **Key Benefits**:\n", + "\t* Improved response times (reduced average handle time by 30%)\n", + "\t* Enhanced customer experience (increased satisfaction ratings by 20%)\n", + "\t* Reduced support queries (decreased ticket volume by 25%)\n", + "\t* Cost savings (lowered labor costs by 15%)\n", + "4. **Target Audience**: Existing customers, new custom...\n", + "--------------------------------------------------\n", + "\n", + "[Analysis 4]:\n", + "**Proposal Evaluation: Customer Service Chatbot Deployment**\n", + "\n", + "**Overview**\n", + "The proposal to deploy a customer service chatbot aims to enhance customer experience, improve response times, and reduce support costs. The chatbot will utilize natural language processing (NLP) and machine learning algorithms to provide automated support for frequently asked questions, routing complex issues to human representatives.\n", + "\n", + "**Security Implications:**\n", + "\n", + "1. 
**Data Protection**: The chatbot will collect and process sensitive customer data, including personal identifiable information (PII), payment details, and conversation history. Ensure that the chatbot's data storage and transmission protocols comply with relevant regulations, such as GDPR, HIPAA, or PCI-DSS.\n", + "2. **Authentication and Authorization**: Implement robust authentication and authorization mechanisms to prevent unauthorized access to customer data and ensure that only authorized personnel can modify chatbot configurations or access sensitive...\n", + "--------------------------------------------------\n", + "\n", + "==================================================\n", + "All agents completed in parallel\n", + "==================================================\n" + ] + } + ], + "source": [ + "from agent_framework import ConcurrentBuilder, WorkflowOutputEvent\n", + "\n", + "# Create specialized analysts\n", + "technical_analyst = ChatAgent(\n", + " name=\"TechnicalAnalyst\",\n", + " chat_client=chat_client,\n", + " instructions=\"You analyze technical feasibility and implementation complexity.\"\n", + ")\n", + "\n", + "business_analyst = ChatAgent(\n", + " name=\"BusinessAnalyst\",\n", + " chat_client=chat_client,\n", + " instructions=\"You analyze business value, ROI, and market impact.\"\n", + ")\n", + "\n", + "security_analyst = ChatAgent(\n", + " name=\"SecurityAnalyst\",\n", + " chat_client=chat_client,\n", + " instructions=\"You analyze security implications, risks, and compliance.\"\n", + ")\n", + "\n", + "print(\"✅ Analyst team created: Technical, Business, Security\")\n", + "\n", + "# Create concurrent workflow - all agents process in parallel\n", + "workflow = (\n", + " ConcurrentBuilder()\n", + " .participants([technical_analyst, business_analyst, security_analyst])\n", + " .build()\n", + ")\n", + "\n", + "# task = \"Evaluate the proposal to deploy Llama Stack for our customer service chatbot.\"\n", + "task = \"Evaluate the proposal to 
deploy a customer service chatbot.\"\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"Concurrent Analysis (Parallel Processing):\")\n", + "print(\"=\"*50)\n", + "\n", + "# Run workflow - agents work in parallel\n", + "async for event in workflow.run_stream(task):\n", + " if isinstance(event, WorkflowOutputEvent):\n", + " # Combined results from all agents\n", + " results = event.data\n", + " for i, result in enumerate(results, 1):\n", + " print(f\"\\n[Analysis {i}]:\")\n", + " print((result.text[:1000] + \"...\") if len(result.text or \"\") > 1000 else result.text)\n", + " print(\"-\" * 50)\n", + "\n", + "print(\"\\n\" + \"=\"*50)\n", + "print(\"All agents completed in parallel\")\n", + "print(\"=\"*50)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.7" } }, "nbformat": 4, "nbformat_minor": 4 +}