{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# AutoGen + Llama Stack Integration\n",
    "\n",
"[](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/autogen/autogen_llama_stack_integration.ipynb)\n",
|
|
"\n",
|
|
"## Overview\n",
|
|
"\n",
|
|
"This notebook demonstrates how to use **AutoGen v0.7.5** with **Llama Stack** as the backend.\n",
|
|
"\n",
|
|
"### Use Cases Covered:\n",
|
|
"1. **Two-Agent Conversation** - Teams working together on tasks\n",
|
|
"2. **Code Generation & Execution** - AutoGen generates and runs code\n",
|
|
"3. **Group Chat** - Multiple specialists collaborating \n",
|
|
"\n",
|
|
"---\n",
|
|
"\n",
|
|
"## Prerequisites\n",
|
|
"\n",
|
|
"```bash\n",
|
|
"# Install AutoGen v0.7.5 (new API)\n",
|
|
"pip install autogen-agentchat autogen-ext\n",
|
|
"\n",
|
|
"# Llama Stack should already be running\n",
|
|
"# Default: http://localhost:8321\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ AutoGen imports successful\n",
      "Using AutoGen v0.7.5 with new team-based API\n",
      "✅ Llama Stack is running at http://localhost:8321\n",
      "Status: 200\n"
     ]
    }
   ],
   "source": [
    "# Imports\n",
    "import os\n",
    "import asyncio\n",
    "from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent\n",
    "from autogen_agentchat.teams import RoundRobinGroupChat\n",
    "from autogen_agentchat.base import TaskResult\n",
    "from autogen_agentchat.messages import TextMessage\n",
    "from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
    "\n",
    "print(\"✅ AutoGen imports successful\")\n",
    "print(\"Using AutoGen v0.7.5 with new team-based API\")\n",
    "\n",
    "# Check Llama Stack connectivity\n",
    "import httpx\n",
    "\n",
    "LLAMA_STACK_URL = \"http://localhost:8321\"\n",
    "\n",
    "try:\n",
    "    response = httpx.get(f\"{LLAMA_STACK_URL}/v1/models\")\n",
    "    print(f\"✅ Llama Stack is running at {LLAMA_STACK_URL}\")\n",
    "    print(f\"Status: {response.status_code}\")\n",
    "except Exception as e:\n",
    "    print(f\"❌ Llama Stack not accessible: {e}\")\n",
    "    print(\"Make sure Llama Stack is running on port 8321\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configuration: AutoGen v0.7.5 with Llama Stack\n",
    "\n",
    "### How It Works\n",
    "\n",
"AutoGen v0.7.5 uses **OpenAIChatCompletionClient** to connect to OpenAI-compatible endpoints like Llama Stack's /v1/chat/completions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Model client configured for Llama Stack\n",
      "Model: ollama/llama3.3:70b\n",
      "Base URL: http://localhost:8321/v1\n"
     ]
    }
   ],
   "source": [
"# Create OpenAI-compatible client for Llama Stack\n",
|
|
"model_client = OpenAIChatCompletionClient(\n",
|
|
" model=\"ollama/llama3.3:70b\", # Choose any other model of your choice.\n",
|
|
" api_key=\"not-needed\",\n",
|
|
" base_url=\"http://localhost:8321/v1\", # For pointing to llama stack end points.\n",
|
|
" model_capabilities={\n",
|
|
" \"vision\": False,\n",
|
|
" \"function_calling\": True,\n",
|
|
" \"json_output\": True,\n",
|
|
" }\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Model client configured for Llama Stack\")\n",
|
|
"print(f\"Model: ollama/llama3.3:70b\")\n",
|
|
"print(f\"Base URL: http://localhost:8321/v1\")"
|
|
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example 1: Simple Task with Assistant Agent\n",
    "\n",
    "### Pattern: Single-Agent Task\n",
    "\n",
    "In v0.7.5, AutoGen uses **Teams** to orchestrate agents, even for simple single-agent tasks.\n",
    "\n",
    "**AssistantAgent:**\n",
    "- AI assistant powered by Llama Stack\n",
"- Executes tasks and provides responses\n",
    "\n",
    "### Use Case: Solve a Math Problem"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"✅ Agent created: MathAssistant\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Task Result:\n",
|
|
"To find the sum of the first 10 prime numbers, we need to follow these steps:\n",
|
|
"\n",
|
|
"1. **Identify the first 10 prime numbers**: Prime numbers are natural numbers greater than 1 that have no divisors other than 1 and themselves.\n",
|
|
"\n",
|
|
"2. **List the first 10 prime numbers**:\n",
|
|
" - Start with 2 (the smallest prime number).\n",
|
|
" - Check each subsequent natural number to see if it is divisible by any prime number less than or equal to its square root. If not, it's a prime number.\n",
|
|
" - Continue until we have 10 prime numbers.\n",
|
|
"\n",
|
|
"3. **Calculate the sum** of these numbers.\n",
|
|
"\n",
|
|
"Let's list the first 10 prime numbers step by step:\n",
|
|
"\n",
|
|
"1. The smallest prime number is **2**.\n",
|
|
"2. The next prime number after 2 is **3**, since it's only divisible by 1 and itself.\n",
|
|
"3. Then comes **5**, because it has no divisors other than 1 and itself.\n",
|
|
"4. Next is **7**, for the same reason as above.\n",
|
|
"5. **11** is also a prime number, as it cannot be divided evenly by any number other than 1 and itself.\n",
|
|
"6. Following this pattern, we identify **13** as a prime number.\n",
|
|
"7. Then, **17**.\n",
|
|
"8. Next in line is **19**.\n",
|
|
"9. After that, we have **23**.\n",
|
|
"10. The tenth prime number is **29**.\n",
|
|
"\n",
|
|
"So, the first 10 prime numbers are: 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29.\n",
|
|
"\n",
|
|
"Now, let's **calculate their sum**:\n",
|
|
"\n",
|
|
"- Start with 0 (or any starting number for summation).\n",
|
|
"- Add each prime number to the total:\n",
|
|
" - 0 + 2 = 2\n",
|
|
" - 2 + 3 = 5\n",
|
|
" - 5 + 5 = 10\n",
|
|
" - 10 + 7 = 17\n",
|
|
" - 17 + 11 = 28\n",
|
|
" - 28 + 13 = 41\n",
|
|
" - 41 + 17 = 58\n",
|
|
" - 58 + 19 = 77\n",
|
|
" - 77 + 23 = 100\n",
|
|
" - 100 + 29 = 129\n",
|
|
"\n",
|
|
"Therefore, the sum of the first 10 prime numbers is **129**.\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"import asyncio\n",
|
|
"\n",
|
|
"# Create an AssistantAgent\n",
|
|
"assistant = AssistantAgent(\n",
|
|
" name=\"MathAssistant\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You are a helpful AI assistant that solves math problems. Provide clear explanations and show your work.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Agent created:\", assistant.name)\n",
|
|
"\n",
|
|
"# Define the task\n",
|
|
"task = \"What is the sum of the first 10 prime numbers? Please calculate it step by step.\"\n",
|
|
"\n",
|
|
"# Run the task (AutoGen v0.7.5 uses async)\n",
|
|
"async def run_simple_task():\n",
|
|
" # Create a simple team with just the assistant\n",
|
|
" team = RoundRobinGroupChat([assistant], max_turns=1)\n",
|
|
" result = await team.run(task=task)\n",
|
|
" return result\n",
|
|
"\n",
|
|
"# Execute in notebook\n",
|
|
"result = await run_simple_task()\n",
|
|
"\n",
|
|
"print(\"\\n\" + \"=\"*50)\n",
|
|
"print(\"Task Result:\")\n",
|
|
"print(result.messages[-1].content if result.messages else \"No response\")"
|
|
]
|
|
},
|
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example 2: Multi-Agent Team Collaboration\n",
    "\n",
    "### Pattern: Multiple Agents Working Together\n",
    "\n",
    "In v0.7.5, AutoGen uses **RoundRobinGroupChat** to create teams where agents take turns contributing to a task.\n",
    "\n",
    "### Use Case: Write a Technical Blog Post"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"✅ Team agents created: Researcher, Writer, Critic\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Final Blog Post:\n",
|
|
"==================================================\n",
|
|
"Turn 1\n",
|
|
"\n",
|
|
"[user]: Write a 200-word blog post about the benefits of using Llama Stack for LLM applications.\n",
|
|
"\n",
|
|
" Steps:\n",
|
|
" 1. Researcher: Gather key information about Llama Stack\n",
|
|
" 2. Writer: Create the blog post\n",
|
|
" ...\n",
|
|
"Turn 2\n",
|
|
"\n",
|
|
"[Researcher]: **Unlocking Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The Llama Stack is a cutting-edge framework designed to optimize Large Language Model (LLM) applications, offering numerous benefits for deve...\n",
|
|
"Turn 3\n",
|
|
"\n",
|
|
"[Writer]: **Unlocking Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The Llama Stack is a revolutionary framework that optimizes Large Language Model (LLM) applications, offering numerous benefits for developer...\n",
|
|
"Turn 4\n",
|
|
"\n",
|
|
"[Critic]: **Reviewed Blog Post:**\n",
|
|
"\n",
|
|
"The provided blog post effectively highlights the benefits of using the Llama Stack for Large Language Model (LLM) applications. However, there are a few areas that could be i...\n",
|
|
"Turn 5\n",
|
|
"\n",
|
|
"[Researcher]: Here's a 200-word blog post about the benefits of using Llama Stack for LLM applications:\n",
|
|
"\n",
|
|
"**Unlocking Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The Llama Stack is a revolutionary framework that ...\n",
|
|
"Turn 6\n",
|
|
"\n",
|
|
"[Writer]: **Unlocking Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The Llama Stack is a game-changer for Large Language Model (LLM) applications, offering numerous benefits for developers and users. By utiliz...\n",
|
|
"Turn 7\n",
|
|
"\n",
|
|
"[Critic]: **Critic's Review:**\n",
|
|
"\n",
|
|
"The provided blog post effectively communicates the benefits of using the Llama Stack for Large Language Model (LLM) applications. Here are some key observations and suggestions ...\n",
|
|
"Turn 8\n",
|
|
"\n",
|
|
"[Researcher]: Here's a rewritten 200-word blog post about the benefits of using Llama Stack for LLM applications:\n",
|
|
"\n",
|
|
"**Unlock Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"In the rapidly evolving landscape of Large ...\n",
|
|
"Turn 9\n",
|
|
"\n",
|
|
"[Writer]: **Unlock Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The Llama Stack revolutionizes Large Language Model (LLM) applications by providing a game-changing framework that optimizes development and dep...\n",
|
|
"Turn 10\n",
|
|
"\n",
|
|
"[Critic]: **Editor's Review:**\n",
|
|
"\n",
|
|
"The rewritten blog post effectively communicates the benefits of using the Llama Stack for Large Language Model (LLM) applications. Here are some key observations and suggestions...\n",
|
|
"Turn 11\n",
|
|
"\n",
|
|
"[Researcher]: Here's a rewritten 200-word blog post about the benefits of using Llama Stack for LLM applications:\n",
|
|
"\n",
|
|
"**Unlock Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"In the rapidly evolving landscape of Large ...\n",
|
|
"Turn 12\n",
|
|
"\n",
|
|
"[Writer]: **Unlock Efficient LLM Applications with Llama Stack**\n",
|
|
"\n",
|
|
"The rapidly evolving landscape of Large Language Models (LLMs) demands efficiency and scalability for success. The Llama Stack is a game-changin...\n",
|
|
"Turn 13\n",
|
|
"\n",
|
|
"[Critic]: **Editor's Review:**\n",
|
|
"\n",
|
|
"The rewritten blog post effectively communicates the benefits of using the Llama Stack for Large Language Model (LLM) applications. Here are some key observations and suggestions...\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Create specialist agents\n",
|
|
"researcher = AssistantAgent(\n",
|
|
" name=\"Researcher\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You are a researcher. Provide accurate information, facts, and statistics about topics.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"writer = AssistantAgent(\n",
|
|
" name=\"Writer\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You are a technical writer. Write clear, engaging content based on research provided.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"critic = AssistantAgent(\n",
|
|
" name=\"Critic\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You are an editor. Review content for clarity, accuracy, and engagement. Suggest improvements.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Team agents created: Researcher, Writer, Critic\")\n",
|
|
"\n",
|
|
"# Create a team with round-robin collaboration\n",
|
|
"async def run_blog_team():\n",
|
|
" team = RoundRobinGroupChat([researcher, writer, critic], max_turns=12)\n",
|
|
"\n",
|
|
" task = \"\"\"Write a 200-word blog post about the benefits of using Llama Stack for LLM applications.\n",
|
|
"\n",
|
|
" Steps:\n",
|
|
" 1. Researcher: Gather key information about Llama Stack\n",
|
|
" 2. Writer: Create the blog post\n",
|
|
" 3. Critic: Review and suggest improvements\n",
|
|
" \"\"\"\n",
|
|
"\n",
|
|
" result = await team.run(task=task)\n",
|
|
" return result\n",
|
|
"\n",
|
|
"# Run the team\n",
|
|
"result = await run_blog_team()\n",
|
|
"\n",
|
|
"print(\"\\n\" + \"=\"*50)\n",
|
|
"print(\"Final Blog Post:\")\n",
|
|
"print(\"=\"*50)\n",
|
    "# Print each turn of the conversation, truncating long messages\n",
    "for i, msg in enumerate(result.messages, 1):\n",
    "    print(f\"Turn {i}\")\n",
    "    print(f\"\\n[{msg.source}]: {msg.content[:200]}...\" if len(msg.content) > 200 else f\"\\n[{msg.source}]: {msg.content}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example 3: Multi-Turn Task\n",
    "\n",
    "### Pattern: Extended Team Collaboration\n",
    "\n",
"Use longer conversations for problem-solving where agents need multiple rounds of discussion.\n",
    "\n",
    "### Use Case: Technical Analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"✅ Analyst agent created\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Analysis Result:\n",
|
|
"==================================================\n",
|
|
"Turn 1\n",
|
|
"Analyze the trade-offs between using local LLMs (like Llama via Llama Stack)\n",
|
|
" versus cloud-based APIs (like OpenAI) for production applications.\n",
|
|
" Consider: cost, latency, privacy, scalability, and maintenance.\n",
|
|
"==================================================\n",
|
|
"Turn 2\n",
|
|
"The debate between using local Large Language Models (LLMs) like Llama via Llama Stack and cloud-based APIs like OpenAI for production applications revolves around several key trade-offs. Here's a detailed analysis of the pros and cons of each approach considering cost, latency, privacy, scalability, and maintenance.\n",
|
|
"\n",
|
|
"### Local LLMs (e.g., Llama via Llama Stack)\n",
|
|
"\n",
|
|
"**Pros:**\n",
|
|
"1. **Privacy:** Running models locally can offer enhanced data privacy since sensitive information doesn't need to be transmitted over the internet or stored on third-party servers.\n",
|
|
"2. **Latency:** Local deployment typically results in lower latency for inference, as it eliminates the need for network requests and responses to cloud services.\n",
|
|
"3. **Customizability:** Users have full control over the model's training data, allowing for fine-tuning that is more tailored to their specific use case or industry.\n",
|
|
"4. **Dependence:** Reduced dependence on external APIs means applications are less vulnerable to service outages or changes in API terms of service.\n",
|
|
"\n",
|
|
"**Cons:**\n",
|
|
"1. **Cost:** While the cost per inference might be lower once models are set up, the initial investment for hardware and potentially personnel with expertise in machine learning can be high.\n",
|
|
"2. **Scalability:** Scaling local deployments to meet growing demand requires purchasing more powerful or additional servers, which can become prohibitively expensive.\n",
|
|
"3. **Maintenance:** Continuous updates to the model for maintaining performance or adapting to new data distributions require significant technical expertise and resource commitment.\n",
|
|
"\n",
|
|
"### Cloud-Based APIs (e.g., OpenAI)\n",
|
|
"\n",
|
|
"**Pros:**\n",
|
|
"1. **Scalability:** Cloud services can easily scale up or down based on demand, without requiring large upfront investments in hardware.\n",
|
|
"2. **Maintenance:** The maintenance burden, including model updates and security patches, is handled by the cloud provider.\n",
|
|
"3. **Accessibility:** Lower barrier to entry due to a lack of need for significant initial investment in hardware or ML expertise; users can start with basic development resources.\n",
|
|
"4. **Cost-Effectiveness:** Pricing models often include a free tier and are usually billed per use (e.g., per API call), making it more predictable and manageable for businesses with fluctuating demand.\n",
|
|
"\n",
|
|
"**Cons:**\n",
|
|
"1. **Privacy:** Sending data to cloud services may pose significant privacy risks, especially for sensitive information.\n",
|
|
"2. **Latency:** Network latency can impact the speed of inferences compared to local deployments.\n",
|
|
"3. **Dependence on Third Parties:** Applications relying on external APIs are at risk if those services change their pricing model, terms of service, or experience outages.\n",
|
|
"4. **Cost at Scale:** While cost-effective for small projects, as usage scales up, costs can quickly add up and become significant.\n",
|
|
"\n",
|
|
"### Recommendations\n",
|
|
"\n",
|
|
"- **Use Local LLMs:**\n",
|
|
" - When data privacy is paramount (e.g., in healthcare, finance).\n",
|
|
" - For applications requiring ultra-low latency.\n",
|
|
" - In scenarios where customizability of the model for a specific task or domain is critical.\n",
|
|
"\n",
|
|
"- **Use Cloud-Based APIs:**\n",
|
|
" - For proofs-of-concept, prototypes, or early-stage startups with variable and potentially low initial demand.\n",
|
|
" - When scalability needs are high and unpredictable, requiring rapid adjustments.\n",
|
|
" - In cases where the expertise and resources to manage local ML deployments are lacking.\n",
|
|
"\n",
|
|
"### Hybrid Approach\n",
|
|
"\n",
|
|
"A potential middle ground involves a hybrid approach: using cloud services for initial development and testing (benefiting from ease of use and scalability), and then transitioning to local deployment once the application has grown and can justify the investment in hardware and expertise. This transition point depends on various factors, including cost considerations, privacy requirements, and specific latency needs.\n",
|
|
"\n",
|
|
"In conclusion, the choice between local LLMs like Llama via Llama Stack and cloud-based APIs such as OpenAI for production applications hinges on a careful evaluation of trade-offs related to cost, latency, privacy, scalability, and maintenance. Each approach has its place, depending on the specific requirements and constraints of an application or project.\n",
|
|
"==================================================\n",
|
|
"Turn 3\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Turn 4\n",
|
|
"When planning your deployment strategy:\n",
|
|
"- **Evaluate Privacy Requirements:** If data privacy is a significant concern, favor local deployments.\n",
|
|
"- **Assess Scalability Needs:** For high variability in demand, cloud services might offer more flexibility.\n",
|
|
"- **Consider Cost Predictability:** Cloud APIs provide cost predictability for variable usage patterns but can become expensive at large scales. Local deployments have higher upfront costs but potentially lower long-term costs per inference.\n",
|
|
"\n",
|
|
"Ultimately, the best approach may involve a combination of both local and cloud solutions, tailored to meet the evolving needs of your application as it grows and matures.\n",
|
|
"==================================================\n",
|
|
"Turn 5\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Turn 6\n",
|
|
"**Final Consideration:** Regardless of which path you choose, ensure you have a deep understanding of your data's privacy implications and the legal requirements surrounding its handling. Additionally, maintaining flexibility in your architecture can allow for transitions between deployment strategies as needed.\n",
|
|
"==================================================\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Create an analyst agent\n",
|
|
"analyst = AssistantAgent(\n",
|
|
" name=\"TechAnalyst\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"\"\"You are a technical analyst. Analyze technical topics deeply:\n",
|
|
" 1. Break down complex concepts\n",
|
|
" 2. Identify pros and cons\n",
|
|
" 3. Provide recommendations\n",
|
|
" \"\"\"\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Analyst agent created\")\n",
|
|
"\n",
|
|
"# Run extended analysis\n",
|
|
"async def run_analysis():\n",
|
|
" team = RoundRobinGroupChat([analyst], max_turns=5)\n",
|
|
"\n",
|
|
" task = \"\"\"Analyze the trade-offs between using local LLMs (like Llama via Llama Stack)\n",
|
|
" versus cloud-based APIs (like OpenAI) for production applications.\n",
|
|
" Consider: cost, latency, privacy, scalability, and maintenance.\"\"\"\n",
|
|
"\n",
|
|
" result = await team.run(task=task)\n",
|
|
" return result\n",
|
|
"\n",
|
|
"result = await run_analysis()\n",
|
|
"\n",
|
|
"print(\"\\n\" + \"=\"*50)\n",
|
|
"print(\"Analysis Result:\")\n",
|
|
"print(\"=\"*50)\n",
|
|
"i=1\n",
|
|
"for message in result.messages:\n",
|
|
" print (f\"Turn {i}\")\n",
|
|
" i+=1\n",
|
|
" print(message.content)\n",
|
|
" print(\"=\"*50)"
|
|
]
|
|
},
|
|
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example 4: Advanced Termination Conditions\n",
    "\n",
    "### Pattern: Code Review Loop with Stopping Logic\n",
    "\n",
    "This example demonstrates termination using:\n",
    "1. **Multiple agents** in a review loop\n",
    "2. **Termination on approval** - Stops when the reviewer says \"LGTM\"\n",
    "3. **Fallback with max_turns** for safety (conditions can also be combined, as sketched below)\n",
    "\n",
    "### Use Case: Iterative Code Review Until Approved"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"✅ Code review team created\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"✅ Review completed in 5 message(s)\n",
|
|
"Stop reason: Text 'LGTM' mentioned\n",
|
|
"==================================================\n",
|
|
"\n",
|
|
"📝 Review Conversation Flow:\n",
|
|
"1. [user]: Implement a Python function to check if a string is a palindrome. The Developer should implement the function first. The Reviewer should then...\n",
|
|
"2. [Developer]: ### Initial Implementation ```python def is_palindrome(s: str) -> bool: \"\"\" Checks if a given string is a palindrome. Args: s (st...\n",
|
|
"3. [CodeReviewer]: ### Code Review Feedback #### Bugs and Edge Cases * The function does not handle non-string inputs. It should raise a `TypeError` when given a non-st...\n",
|
|
"4. [Developer]: ### Revised Implementation ```python def is_palindrome(s: str, ignore_case: bool = True, ignore_whitespace_and_punctuation: bool = True) -> bool: ...\n",
|
|
"5. [CodeReviewer]: ### Code Review Feedback The revised implementation has addressed all the concerns raised during the initial code review. Here's a summary of the key...\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Final Code (last message):\n",
|
|
"==================================================\n",
|
|
"### Code Review Feedback\n",
|
|
"\n",
|
|
"The revised implementation has addressed all the concerns raised during the initial code review. Here's a summary of the key points:\n",
|
|
"\n",
|
|
"* **Type checking**: The function now correctly raises a `TypeError` if the input is not a string.\n",
|
|
"* **Optional parameters**: The addition of optional parameters for ignoring case and whitespace/punctuation provides flexibility in how palindromes are checked.\n",
|
|
"* **Preprocessing**: The preprocessing steps to ignore case and remove non-alphanumeric characters are implemented correctly and efficiently.\n",
|
|
"* **Efficient palindrome check**: The two-pointer approach used to compare characters from both ends of the string is efficient, with a time complexity of O(n).\n",
|
|
"* **Documentation and examples**: The docstring has been improved with clear explanations and examples, making it easier for users to understand how to use the function.\n",
|
|
"\n",
|
|
"#### Minor Suggestions\n",
|
|
"\n",
|
|
"1. **Input Validation**: Consider adding input validation for the optional parameters `ignore_case` and `ignore_whitespace_and_punctuation`. Currently, they are expected to be boolean values, but there is no explicit check for this.\n",
|
|
"2. **Type Hints for Optional Parameters**: While not required, adding type hints for the optional parameters can improve code readability and help catch potential errors.\n",
|
|
"\n",
|
|
"#### Revised Implementation with Minor Suggestions\n",
|
|
"\n",
|
|
"```python\n",
|
|
"def is_palindrome(s: str, ignore_case: bool = True, ignore_whitespace_and_punctuation: bool = True) -> bool:\n",
|
|
" \"\"\"\n",
|
|
" Checks if a given string is a palindrome.\n",
|
|
"\n",
|
|
" Args:\n",
|
|
" s (str): The input string to check.\n",
|
|
" ignore_case (bool): Whether to ignore case when checking for palindromes. Defaults to True.\n",
|
|
" ignore_whitespace_and_punctuation (bool): Whether to ignore whitespace and punctuation when checking for palindromes. Defaults to True.\n",
|
|
"\n",
|
|
" Returns:\n",
|
|
" bool: True if the string is a palindrome, False otherwise.\n",
|
|
"\n",
|
|
" Raises:\n",
|
|
" TypeError: If s is not a string or if ignore_case/ignore_whitespace_and_punctuation are not boolean values.\n",
|
|
"\n",
|
|
" Examples:\n",
|
|
" >>> is_palindrome(\"radar\")\n",
|
|
" True\n",
|
|
" >>> is_palindrome(\"hello\")\n",
|
|
" False\n",
|
|
" >>> is_palindrome(\"A man, a plan, a canal: Panama\", ignore_whitespace_and_punctuation=True)\n",
|
|
" True\n",
|
|
" \"\"\"\n",
|
|
" # Check input type\n",
|
|
" if not isinstance(s, str):\n",
|
|
" raise TypeError(\"Input must be a string\")\n",
|
|
" \n",
|
|
" # Validate optional parameters\n",
|
|
" if not isinstance(ignore_case, bool) or not isinstance(ignore_whitespace_and_punctuation, bool):\n",
|
|
" raise TypeError(\"Optional parameters must be boolean values\")\n",
|
|
"\n",
|
|
" # Preprocess the string based on options\n",
|
|
" if ignore_case:\n",
|
|
" s = s.casefold()\n",
|
|
" if ignore_whitespace_and_punctuation:\n",
|
|
" s = ''.join(c for c in s if c.isalnum())\n",
|
|
"\n",
|
|
" # Compare characters from both ends of the string, moving towards the center\n",
|
|
" left, right = 0, len(s) - 1\n",
|
|
" while left < right:\n",
|
|
" if s[left] != s[right]:\n",
|
|
" return False\n",
|
|
" left += 1\n",
|
|
" right -= 1\n",
|
|
" return True\n",
|
|
"\n",
|
|
"# Example usage:\n",
|
|
"if __name__ == \"__main__\":\n",
|
|
" print(is_palindrome(\"radar\")) # Expected output: True\n",
|
|
" print(is_palindrome(\"hello\")) # Expected output: False\n",
|
|
" print(is_palindrome(\"A man, a plan, a canal: Panama\", ignore_whitespace_and_punctuation=True)) # Expected output: True\n",
|
|
"```\n",
|
|
"\n",
|
|
"With these minor suggestions addressed, the code is robust and follows best practices for readability, maintainability, and error handling.\n",
|
|
"\n",
|
|
"LGTM!\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from autogen_agentchat.conditions import TextMentionTermination\n",
|
|
"\n",
|
|
"# Create code review agents\n",
|
|
"code_reviewer = AssistantAgent(\n",
|
|
" name=\"CodeReviewer\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"\"\"You are a senior code reviewer. Review code for:\n",
|
|
" - Bugs and edge cases\n",
|
|
" - Performance issues\n",
|
|
" - Security vulnerabilities\n",
|
|
" - Best practices\n",
|
|
"\n",
|
|
" If the code looks good, say 'LGTM' (Looks Good To Me).\n",
|
|
" If issues found, provide specific feedback for improvement.\"\"\"\n",
|
|
")\n",
|
|
"\n",
|
|
"code_developer = AssistantAgent(\n",
|
|
" name=\"Developer\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"\"\"You are a developer. When you receive code review feedback:\n",
|
|
" - Address ALL issues mentioned\n",
|
|
" - Explain your changes\n",
|
|
" - Present the improved code\n",
|
|
"\n",
|
|
" If no feedback is given, present your initial implementation.\"\"\"\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Code review team created\")\n",
|
|
"\n",
|
|
"# Complex termination: Stops when reviewer approves OR max iterations reached\n",
|
|
"async def run_code_review_loop():\n",
|
|
" # Stop when reviewer says \"LGTM\"\n",
|
|
" approval_termination = TextMentionTermination(\"LGTM\")\n",
|
|
"\n",
|
|
" team = RoundRobinGroupChat(\n",
|
|
" [code_developer, code_reviewer],\n",
|
    "        max_turns=16,  # Safety cap: at most 8 review cycles (developer + reviewer = 2 turns per cycle)\n",
    "        termination_condition=approval_termination\n",
    "    )\n",
    "\n",
    "    task = \"\"\"Implement a Python function to check if a string is a palindrome.\n",
    "\n",
    "    The Developer should implement the function first.\n",
    "    The Reviewer should then review it and provide feedback.\n",
    "    Continue iterating until the Reviewer approves the code.\n",
    "    \"\"\"\n",
    "\n",
    "    result = await team.run(task=task)\n",
    "    return result\n",
    "\n",
    "result = await run_code_review_loop()\n",
    "\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(f\"✅ Review completed in {len(result.messages)} message(s)\")\n",
    "print(f\"Stop reason: {result.stop_reason}\")\n",
    "print(\"=\"*50)\n",
    "\n",
    "# Show the conversation flow\n",
    "print(\"\\n📝 Review Conversation Flow:\")\n",
    "for i, msg in enumerate(result.messages, 1):\n",
    "    preview = msg.content[:150].replace('\\n', ' ')\n",
    "    print(f\"{i}. [{msg.source}]: {preview}...\")\n",
    "\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(\"Final Code (last message):\")\n",
    "print(\"=\"*50)\n",
    "if result.messages:\n",
    "    print(result.messages[-1].content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example 5: Practical Team Use Case\n",
    "\n",
    "### Pattern: Research → Write → Review Pipeline\n",
    "\n",
    "A common pattern in content creation: research, draft, review, finalize.\n",
    "\n",
    "### Use Case: Documentation Generator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
"✅ Documentation team created\n",
|
|
"\n",
|
|
"==================================================\n",
|
|
"Generated Documentation:\n",
|
|
"==================================================\n",
|
|
"Turn 1\n",
|
|
"Create documentation for a hypothetical food recipe:\n",
|
|
"\n",
|
|
" Food: `Cheese Pizza`\n",
|
|
"\n",
|
|
" Include:\n",
|
|
" - Description\n",
|
|
" - Ingredients\n",
|
|
" - How to make it\n",
|
|
" - Steps\n",
|
|
" \n",
|
|
"Turn 2\n",
|
|
"**Cheese Pizza Recipe Documentation**\n",
|
|
"=====================================\n",
|
|
"\n",
|
|
"### Description\n",
|
|
"\n",
|
|
"A classic Cheese Pizza is a delicious and satisfying dish that consists of a crispy crust topped with a rich tomato sauce, melted mozzarella cheese, and various seasonings. This recipe provides a simple and easy-to-follow guide to making a mouth-watering Cheese Pizza at home.\n",
|
|
"\n",
|
|
"### Ingredients\n",
|
|
"\n",
|
|
"* **Crust:**\n",
|
|
"\t+ 2 cups of warm water\n",
|
|
"\t+ 1 tablespoon of sugar\n",
|
|
"\t+ 2 teaspoons of active dry yeast\n",
|
|
"\t+ 3 cups of all-purpose flour\n",
|
|
"\t+ 1 teaspoon of salt\n",
|
|
"\t+ 2 tablespoons of olive oil\n",
|
|
"* **Sauce:**\n",
|
|
"\t+ 2 cups of crushed tomatoes\n",
|
|
"\t+ 1/4 cup of olive oil\n",
|
|
"\t+ 4 cloves of garlic, minced\n",
|
|
"\t+ 1 teaspoon of dried oregano\n",
|
|
"\t+ Salt and pepper to taste\n",
|
|
"* **Toppings:**\n",
|
|
"\t+ 8 ounces of mozzarella cheese, shredded\n",
|
|
"\t+ Fresh basil leaves, chopped (optional)\n",
|
|
"\n",
|
|
"### How to Make It\n",
|
|
"\n",
|
|
"To make a Cheese Pizza, follow these steps:\n",
|
|
"\n",
|
|
"#### Steps\n",
|
|
"\n",
|
|
"1. **Activate the Yeast:**\n",
|
|
"\t* In a large bowl, combine the warm water, sugar, and yeast.\n",
|
|
"\t* Stir gently to dissolve the yeast, and let it sit for 5-10 minutes until frothy.\n",
|
|
"2. **Make the Crust:**\n",
|
|
"\t* Add the flour, salt, and olive oil to the bowl with the yeast mixture.\n",
|
|
"\t* Mix the dough until it comes together in a ball.\n",
|
|
"\t* Knead the dough on a floured surface for 5-10 minutes until smooth and elastic.\n",
|
|
"3. **Prepare the Sauce:**\n",
|
|
"\t* In a separate bowl, combine the crushed tomatoes, olive oil, garlic, oregano, salt, and pepper.\n",
|
|
"\t* Mix well to create a smooth sauce.\n",
|
|
"4. **Assemble the Pizza:**\n",
|
|
"\t* Preheat the oven to 425°F (220°C).\n",
|
|
"\t* Roll out the dough into a circle or rectangle shape, depending on your preference.\n",
|
|
"\t* Place the dough on a baking sheet or pizza stone.\n",
|
|
"\t* Spread the tomato sauce evenly over the dough, leaving a small border around the edges.\n",
|
|
"5. **Add the Cheese:**\n",
|
|
"\t* Sprinkle the shredded mozzarella cheese over the sauce.\n",
|
|
"6. **Bake the Pizza:**\n",
|
|
"\t* Bake the pizza in the preheated oven for 15-20 minutes until the crust is golden brown and the cheese is melted and bubbly.\n",
|
|
"7. **Garnish with Fresh Basil (Optional):**\n",
|
|
"\t* Remove the pizza from the oven and sprinkle chopped fresh basil leaves over the top, if desired.\n",
|
|
"\n",
|
|
"### Tips and Variations\n",
|
|
"\n",
|
|
"* For a crispy crust, bake the pizza for an additional 2-3 minutes.\n",
|
|
"* Add other toppings such as pepperoni, sausage, or mushrooms to create a unique flavor combination.\n",
|
|
"* Use different types of cheese, such as cheddar or parmesan, for a varied flavor profile.\n",
|
|
"\n",
|
|
"Enjoy your delicious homemade Cheese Pizza!\n",
|
|
"Turn 3\n",
|
|
"**Cheese Pizza Recipe Documentation**\n",
|
|
"=====================================\n",
|
|
"\n",
|
|
"### Description\n",
|
|
"\n",
|
|
"A classic Cheese Pizza is a delicious and satisfying dish that consists of a crispy crust topped with a rich tomato sauce, melted mozzarella cheese, and various seasonings. This recipe provides a simple and easy-to-follow guide to making a mouth-watering Cheese Pizza at home.\n",
|
|
"\n",
|
|
"### Ingredients\n",
|
|
"\n",
|
|
"* **Crust:**\n",
|
|
"\t+ 2 cups of warm water\n",
|
|
"\t+ 1 tablespoon of sugar\n",
|
|
"\t+ 2 teaspoons of active dry yeast\n",
|
|
"\t+ 3 cups of all-purpose flour\n",
|
|
"\t+ 1 teaspoon of salt\n",
|
|
"\t+ 2 tablespoons of olive oil\n",
|
|
"* **Sauce:**\n",
|
|
"\t+ 2 cups of crushed tomatoes\n",
|
|
"\t+ 1/4 cup of olive oil\n",
|
|
"\t+ 4 cloves of garlic, minced\n",
|
|
"\t+ 1 teaspoon of dried oregano\n",
|
|
"\t+ Salt and pepper to taste\n",
|
|
"* **Toppings:**\n",
|
|
"\t+ 8 ounces of mozzarella cheese, shredded\n",
|
|
"\t+ Fresh basil leaves, chopped (optional)\n",
|
|
"\n",
|
|
"### How to Make It\n",
|
|
"\n",
|
|
"To make a Cheese Pizza, follow these steps:\n",
|
|
"\n",
|
|
"#### Steps\n",
|
|
"\n",
|
|
"1. **Activate the Yeast:**\n",
|
|
"\t* In a large bowl, combine the warm water, sugar, and yeast.\n",
|
|
"\t* Stir gently to dissolve the yeast, and let it sit for 5-10 minutes until frothy.\n",
|
|
"2. **Make the Crust:**\n",
|
|
"\t* Add the flour, salt, and olive oil to the bowl with the yeast mixture.\n",
|
|
"\t* Mix the dough until it comes together in a ball.\n",
|
|
"\t* Knead the dough on a floured surface for 5-10 minutes until smooth and elastic.\n",
|
|
"3. **Prepare the Sauce:**\n",
|
|
"\t* In a separate bowl, combine the crushed tomatoes, olive oil, garlic, oregano, salt, and pepper.\n",
|
|
"\t* Mix well to create a smooth sauce.\n",
|
|
"4. **Assemble the Pizza:**\n",
|
|
"\t* Preheat the oven to 425°F (220°C).\n",
|
|
"\t* Roll out the dough into a circle or rectangle shape, depending on your preference.\n",
|
|
"\t* Place the dough on a baking sheet or pizza stone.\n",
|
|
"\t* Spread the tomato sauce evenly over the dough, leaving a small border around the edges.\n",
|
|
"5. **Add the Cheese:**\n",
|
|
"\t* Sprinkle the shredded mozzarella cheese over the sauce.\n",
|
|
"6. **Bake the Pizza:**\n",
|
|
"\t* Bake the pizza in the preheated oven for 15-20 minutes until the crust is golden brown and the cheese is melted and bubbly.\n",
|
|
"7. **Garnish with Fresh Basil (Optional):**\n",
|
|
"\t* Remove the pizza from the oven and sprinkle chopped fresh basil leaves over the top, if desired.\n",
|
|
"\n",
|
|
"### Tips and Variations\n",
|
|
"\n",
|
|
"* For a crispy crust, bake the pizza for an additional 2-3 minutes.\n",
|
|
"* Add other toppings such as pepperoni, sausage, or mushrooms to create a unique flavor combination.\n",
|
|
"* Use different types of cheese, such as cheddar or parmesan, for a varied flavor profile.\n",
|
|
"\n",
|
|
"Enjoy your delicious homemade Cheese Pizza!\n",
|
|
"Turn 4\n",
|
|
"It seems like you've copied the entire Cheese Pizza Recipe Documentation I provided earlier. If you'd like to make any changes or additions to the recipe, or if you have any questions about it, feel free to ask and I'll be happy to help! \n",
|
|
"\n",
|
|
"If you're looking for some suggestions on how to improve the documentation, here are a few ideas:\n",
|
|
"\n",
|
|
"1. **Add nutritional information**: Providing the nutritional content of the Cheese Pizza, such as calories, fat, carbohydrates, and protein, can be helpful for people who are tracking their diet.\n",
|
|
"2. **Include images or diagrams**: Adding images or diagrams of the different steps in the recipe can help to clarify the process and make it easier to follow.\n",
|
|
"3. **Provide variations for special diets**: Offering suggestions for how to modify the recipe to accommodate special dietary needs, such as gluten-free or vegan, can make the recipe more accessible to a wider range of people.\n",
|
|
"4. **Add a troubleshooting section**: Including a section that addresses common problems or issues that may arise during the cooking process, such as a crust that's too thick or cheese that's not melting properly, can help to ensure that the recipe turns out well.\n",
|
|
"\n",
|
|
"Let me know if you have any other questions or if there's anything else I can help with!\n",
|
|
"Turn 5\n",
|
|
"I apologize for copying the entire documentation earlier. You're right; it would be more helpful to provide suggestions or improvements to the existing recipe. Here are some potential additions and modifications:\n",
|
|
"\n",
|
|
"1. **Nutritional Information:**\n",
|
|
"\t* Calculating the nutritional content of the Cheese Pizza could involve breaking down the ingredients and their corresponding calorie, fat, carbohydrate, and protein contributions.\n",
|
|
"\t* For example:\n",
|
|
"\t\t+ Crust (2 cups flour, 1 teaspoon sugar, 1/4 cup olive oil): approximately 300-350 calories, 10-12g fat, 50-60g carbohydrates, 10-12g protein\n",
|
|
"\t\t+ Sauce (2 cups crushed tomatoes, 1/4 cup olive oil, 4 cloves garlic, 1 teaspoon dried oregano): approximately 100-150 calories, 10-12g fat, 20-25g carbohydrates, 5-7g protein\n",
|
|
"\t\t+ Cheese (8 ounces mozzarella): approximately 200-250 calories, 15-18g fat, 5-7g carbohydrates, 20-25g protein\n",
|
|
"\t* Total estimated nutritional content: approximately 600-750 calories, 35-42g fat, 75-92g carbohydrates, 35-44g protein (per serving)\n",
|
|
"2. **Images and Diagrams:**\n",
|
|
"\t* Adding images of the different steps in the recipe could help illustrate the process, such as:\n",
|
|
"\t\t+ A photo of the yeast mixture after it has sat for 5-10 minutes\n",
|
|
"\t\t+ An image of the dough being kneaded on a floured surface\n",
|
|
"\t\t+ A diagram showing how to roll out the dough into a circle or rectangle shape\n",
|
|
"3. **Variations for Special Diets:**\n",
|
|
"\t* Gluten-free crust option:\n",
|
|
"\t\t+ Replace all-purpose flour with gluten-free flour blend (containing rice flour, potato starch, and tapioca flour)\n",
|
|
"\t\t+ Add xanthan gum to help improve texture and structure\n",
|
|
"\t* Vegan cheese alternative:\n",
|
|
"\t\t+ Use vegan mozzarella cheese or soy-based cheese alternative\n",
|
|
"\t\t+ Consider adding nutritional yeast for a cheesy flavor\n",
|
|
"4. **Troubleshooting Section:**\n",
|
|
"\t* Common issues with the crust:\n",
|
|
"\t\t+ Too thick: try reducing the amount of flour or increasing the yeast fermentation time\n",
|
|
"\t\t+ Too thin: try increasing the amount of flour or adding more water\n",
|
|
"\t* Issues with the cheese melting:\n",
|
|
"\t\t+ Not melting properly: try broiling the pizza for an additional 1-2 minutes or using a higher-quality mozzarella cheese\n",
|
|
"\n",
|
|
"Please let me know if you'd like to discuss any of these suggestions further or if there's anything else I can help with!\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Create documentation team\n",
|
|
"doc_researcher = AssistantAgent(\n",
|
|
" name=\"DocResearcher\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You research technical topics and gather key information for documentation.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"doc_writer = AssistantAgent(\n",
|
|
" name=\"DocWriter\",\n",
|
|
" model_client=model_client,\n",
|
|
" system_message=\"You write clear, concise technical documentation with examples.\"\n",
|
|
")\n",
|
|
"\n",
|
|
"print(\"✅ Documentation team created\")\n",
|
|
"\n",
|
|
"# Run documentation pipeline\n",
|
|
"async def create_documentation():\n",
|
|
" team = RoundRobinGroupChat([doc_researcher, doc_writer], max_turns=4)\n",
|
|
" task = \"\"\"Create documentation for a hypothetical food recipe:\n",
|
|
"\n",
|
|
" Food: `Cheese Pizza`\n",
|
|
"\n",
|
|
" Include:\n",
|
|
" - Description\n",
|
|
" - Ingredients\n",
|
|
" - How to make it\n",
|
|
" - Steps\n",
|
|
" \"\"\"\n",
|
|
"\n",
|
|
" result = await team.run(task=task)\n",
|
|
" return result\n",
|
|
"\n",
|
|
"result = await create_documentation()\n",
|
|
"\n",
|
|
"print(\"\\n\" + \"=\"*50)\n",
|
|
"print(\"Generated Documentation:\")\n",
|
|
"print(\"=\"*50)\n",
|
|
"i=1\n",
|
|
"for message in result.messages:\n",
|
|
" print(f\"Turn {i}\")\n",
|
|
" i+=1\n",
|
|
" print(message.content)\n",
|
|
"\n",
|
|
"# Turn 1: `DocResearcher` receives the task → researches the topic\n",
|
|
"# Turn 2: `DocWriter` sees the task + researcher's output → writes documentation\n",
|
|
"# Turn 3**: `DocResearcher` sees everything → can add more info\n",
|
|
"# Turn 4: `DocWriter` sees everything → refines documentation\n",
|
|
"# Stops at `max_turns=4`\n"
|
|
]
|
|
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Next Steps\n",
    "\n",
    "1. **Install AutoGen**: `pip install autogen-agentchat autogen-ext`\n",
    "2. **Start Llama Stack**: Ensure it's running on `http://localhost:8321`\n",
    "3. **Experiment**: Try different team compositions and task types\n",
"4. **Explore**: Check out SelectorGroupChat and other team types\n",
    "\n",
    "### Resources\n",
    "\n",
    "- **AutoGen v0.7.5 Docs**: https://microsoft.github.io/autogen/\n",
    "- **Llama Stack Docs**: https://llama-stack.readthedocs.io/"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}