{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool Calling\n",
"\n",
"Before you begin, please ensure Llama Stack is installed and set up by following the [Getting Started Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this section, we'll explore how to enhance your applications with tool calling capabilities. We'll cover:\n",
"1. Setting up and using the Brave Search API\n",
"2. Creating custom tools\n",
"3. Configuring tool prompts and safety settings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set up your connection parameters:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"HOST = \"localhost\" # Replace with your host\n",
"PORT = 5000 # Replace with your port"
]
},
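{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before going further, it helps to confirm the client can actually reach the server. The sketch below assumes the Llama Stack server from the Getting Started Guide is running at `HOST:PORT`; it simply lists the models the server reports, which is the same call the agents below rely on:\n",
"\n",
"```python\n",
"from llama_stack_client import LlamaStackClient\n",
"\n",
"client = LlamaStackClient(base_url=f\"http://{HOST}:{PORT}\")\n",
"\n",
"# A successful call confirms connectivity and shows the available models\n",
"for model in client.models.list():\n",
"    print(model.identifier)\n",
"```"
]
},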
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"import os\n",
"from typing import Dict, List, Optional\n",
"from dotenv import load_dotenv\n",
"\n",
"from llama_stack_client import LlamaStackClient\n",
"from llama_stack_client.lib.agents.agent import Agent\n",
"from llama_stack_client.lib.agents.event_logger import EventLogger\n",
"from llama_stack_client.types.agent_create_params import (\n",
" AgentConfig,\n",
" AgentConfigToolSearchToolDefinition,\n",
")\n",
"\n",
"# Load environment variables\n",
"load_dotenv()\n",
"\n",
"# Helper function to create an agent with tools\n",
"async def create_tool_agent(\n",
" client: LlamaStackClient,\n",
" tools: List[Dict],\n",
" instructions: str = \"You are a helpful assistant\",\n",
" model: str = \"Llama3.2-11B-Vision-Instruct\",\n",
") -> Agent:\n",
" \"\"\"Create an agent with specified tools.\"\"\"\n",
" print(\"Using the following model: \", model)\n",
" agent_config = AgentConfig(\n",
" model=model,\n",
" instructions=instructions,\n",
" sampling_params={\n",
" \"strategy\": \"greedy\",\n",
" \"temperature\": 1.0,\n",
" \"top_p\": 0.9,\n",
" },\n",
" tools=tools,\n",
" tool_choice=\"auto\",\n",
" tool_prompt_format=\"json\",\n",
" enable_session_persistence=True,\n",
" )\n",
"\n",
" return Agent(client, agent_config)"
]
},
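{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `tools` list passed to `create_tool_agent` holds the tool definitions the agent may call, while `tool_choice=\"auto\"` lets the model decide per turn whether to call one and `tool_prompt_format=\"json\"` formats tool definitions and calls as JSON in the prompt. As a minimal sketch (mirroring the search tool configured in the next section), a built-in search tool definition looks like this:\n",
"\n",
"```python\n",
"search_tool = AgentConfigToolSearchToolDefinition(\n",
"    type=\"brave_search\",\n",
"    engine=\"brave\",\n",
"    api_key=os.getenv(\"BRAVE_SEARCH_API_KEY\"),  # set up in the next step\n",
")\n",
"```"
]
},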
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, create a `.env` file in your notebook directory with your Brave Search API key:\n",
"\n",
"```\n",
"BRAVE_SEARCH_API_KEY=your_key_here\n",
"```\n"
]
},
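{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `load_dotenv()` already ran in the setup cell, you can verify the key is visible to the notebook before wiring it into an agent. A minimal check (re-run `load_dotenv()` if you created the `.env` file after the setup cell):\n",
"\n",
"```python\n",
"import os\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"# Should print True; avoid printing the key itself\n",
"print(os.getenv(\"BRAVE_SEARCH_API_KEY\") is not None)\n",
"```"
]
},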
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using the following model: Llama3.2-11B-Vision-Instruct\n",
"\n",
"Query: What are the latest developments in quantum computing?\n",
"--------------------------------------------------\n",
"\u001b[30m\u001b[0m\u001b[33minference> \u001b[0m\u001b[33mF\u001b[0m\u001b[33mIND\u001b[0m\u001b[33mINGS\u001b[0m\u001b[33m:\n",
"\u001b[0m\u001b[33mThe\u001b[0m\u001b[33m latest\u001b[0m\u001b[33m developments\u001b[0m\u001b[33m in\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m computing\u001b[0m\u001b[33m include\u001b[0m\u001b[33m the\u001b[0m\u001b[33m creation\u001b[0m\u001b[33m of\u001b[0m\u001b[33m more\u001b[0m\u001b[33m powerful\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m processors\u001b[0m\u001b[33m,\u001b[0m\u001b[33m advancements\u001b[0m\u001b[33m in\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m error\u001b[0m\u001b[33m correction\u001b[0m\u001b[33m,\u001b[0m\u001b[33m and\u001b[0m\u001b[33m the\u001b[0m\u001b[33m development\u001b[0m\u001b[33m of\u001b[0m\u001b[33m new\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m algorithms\u001b[0m\u001b[33m.\u001b[0m\u001b[33m For\u001b[0m\u001b[33m example\u001b[0m\u001b[33m,\u001b[0m\u001b[33m Google\u001b[0m\u001b[33m has\u001b[0m\u001b[33m announced\u001b[0m\u001b[33m the\u001b[0m\u001b[33m development\u001b[0m\u001b[33m of\u001b[0m\u001b[33m a\u001b[0m\u001b[33m \u001b[0m\u001b[33m53\u001b[0m\u001b[33m-q\u001b[0m\u001b[33mubit\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m processor\u001b[0m\u001b[33m,\u001b[0m\u001b[33m which\u001b[0m\u001b[33m is\u001b[0m\u001b[33m the\u001b[0m\u001b[33m largest\u001b[0m\u001b[33m and\u001b[0m\u001b[33m most\u001b[0m\u001b[33m complex\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m processor\u001b[0m\u001b[33m ever\u001b[0m\u001b[33m built\u001b[0m\u001b[33m (\u001b[0m\u001b[33mSource\u001b[0m\u001b[33m:\u001b[0m\u001b[33m Google\u001b[0m\u001b[33m Blog\u001b[0m\u001b[33m).\u001b[0m\u001b[33m Additionally\u001b[0m\u001b[33m,\u001b[0m\u001b[33m researchers\u001b[0m\u001b[33m have\u001b[0m\u001b[33m made\u001b[0m\u001b[33m progress\u001b[0m\u001b[33m in\u001b[0m\u001b[33m developing\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m error\u001b[0m\u001b[33m correction\u001b[0m\u001b[33m techniques\u001b[0m\u001b[33m,\u001b[0m\u001b[33m which\u001b[0m\u001b[33m are\u001b[0m\u001b[33m necessary\u001b[0m\u001b[33m for\u001b[0m\u001b[33m large\u001b[0m\u001b[33m-scale\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m computing\u001b[0m\u001b[33m (\u001b[0m\u001b[33mSource\u001b[0m\u001b[33m:\u001b[0m\u001b[33m NASA\u001b[0m\u001b[33m).\u001b[0m\u001b[33m Furthermore\u001b[0m\u001b[33m,\u001b[0m\u001b[33m new\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m algorithms\u001b[0m\u001b[33m have\u001b[0m\u001b[33m been\u001b[0m\u001b[33m developed\u001b[0m\u001b[33m for\u001b[0m\u001b[33m solving\u001b[0m\u001b[33m specific\u001b[0m\u001b[33m problems\u001b[0m\u001b[33m,\u001b[0m\u001b[33m such\u001b[0m\u001b[33m as\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m simulation\u001b[0m\u001b[33m and\u001b[0m\u001b[33m machine\u001b[0m\u001b[33m learning\u001b[0m\u001b[33m (\u001b[0m\u001b[33mSource\u001b[0m\u001b[33m:\u001b[0m\u001b[33m IBM\u001b[0m\u001b[33m Research\u001b[0m\u001b[33m).\n",
"\n",
"\u001b[0m\u001b[33mS\u001b[0m\u001b[33mOURCES\u001b[0m\u001b[33m:\n",
"\u001b[0m\u001b[33m-\u001b[0m\u001b[33m Google\u001b[0m\u001b[33m Blog\u001b[0m\u001b[33m:\u001b[0m\u001b[33m \"\u001b[0m\u001b[33mInt\u001b[0m\u001b[33mroducing\u001b[0m\u001b[33m B\u001b[0m\u001b[33mrist\u001b[0m\u001b[33mle\u001b[0m\u001b[33mcone\u001b[0m\u001b[33m,\u001b[0m\u001b[33m a\u001b[0m\u001b[33m \u001b[0m\u001b[33m72\u001b[0m\u001b[33m-q\u001b[0m\u001b[33mubit\u001b[0m\u001b[33m quantum\u001b[0m\u001b[33m processor\u001b[0m\u001b[33m\"\u001b[0m\u001b[33m (\u001b[0m\u001b[33mhttps\u001b[0m\u001b[33m://\u001b[0m\u001b[33mblog\u001b[0m\u001b[33m.google\u001b[0m\u001b[33m/\u001b[0m\u001b[33mtechnology\u001b[0m\u001b[33m/\u001b[0m\u001b[33mai\u001b[0m\u001b[33m/\u001b[0m\u001b[33mbr\u001b[0m\u001b[33mistle\u001b[0m\u001b[33mcone\u001b[0m\u001b[33m-\u001b[0m\u001b[33m72\u001b[0m\u001b[33m-q\u001b[0m\u001b[33mubit\u001b[0m\u001b[33m-\u001b[0m\u001b[33mquant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m-\u001b[0m\u001b[33mprocessor\u001b[0m\u001b[33m/)\n",
"\u001b[0m\u001b[33m-\u001b[0m\u001b[33m NASA\u001b[0m\u001b[33m:\u001b[0m\u001b[33m \"\u001b[0m\u001b[33mQuant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m Error\u001b[0m\u001b[33m Correction\u001b[0m\u001b[33m\"\u001b[0m\u001b[33m (\u001b[0m\u001b[33mhttps\u001b[0m\u001b[33m://\u001b[0m\u001b[33mn\u001b[0m\u001b[33masa\u001b[0m\u001b[33m.n\u001b[0m\u001b[33mengu\u001b[0m\u001b[33mage\u001b[0m\u001b[33mcenter\u001b[0m\u001b[33m.org\u001b[0m\u001b[33m/topics\u001b[0m\u001b[33m/\u001b[0m\u001b[33mquant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m-com\u001b[0m\u001b[33mput\u001b[0m\u001b[33ming\u001b[0m\u001b[33m/\u001b[0m\u001b[33mquant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m-error\u001b[0m\u001b[33m-cor\u001b[0m\u001b[33mrection\u001b[0m\u001b[33m/)\n",
"\u001b[0m\u001b[33m-\u001b[0m\u001b[33m IBM\u001b[0m\u001b[33m Research\u001b[0m\u001b[33m:\u001b[0m\u001b[33m \"\u001b[0m\u001b[33mQuant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m Algorithms\u001b[0m\u001b[33m\"\u001b[0m\u001b[33m (\u001b[0m\u001b[33mhttps\u001b[0m\u001b[33m://\u001b[0m\u001b[33mwww\u001b[0m\u001b[33m.ibm\u001b[0m\u001b[33m.com\u001b[0m\u001b[33m/\u001b[0m\u001b[33mquant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m/com\u001b[0m\u001b[33mput\u001b[0m\u001b[33ming\u001b[0m\u001b[33m/\u001b[0m\u001b[33mdeveloper\u001b[0m\u001b[33m-resource\u001b[0m\u001b[33m/\u001b[0m\u001b[33mquant\u001b[0m\u001b[33mum\u001b[0m\u001b[33m-al\u001b[0m\u001b[33mgorithms\u001b[0m\u001b[33m/)\u001b[0m\u001b[97m\u001b[0m\n",
"\u001b[30m\u001b[0m"
]
}
],
"source": [
"async def create_search_agent(client: LlamaStackClient) -> Agent:\n",
" \"\"\"Create an agent with Brave Search capability.\"\"\"\n",
" search_tool = AgentConfigToolSearchToolDefinition(\n",
" type=\"brave_search\",\n",
" engine=\"brave\",\n",
" api_key=\"dummy_value\"#os.getenv(\"BRAVE_SEARCH_API_KEY\"),\n",
" )\n",
" \n",
" models_response = client.models.list()\n",
" for model in models_response:\n",
" if model.identifier.endswith(\"Instruct\"):\n",
" model_name = model.llama_model\n",
" \n",
"\n",
" return await create_tool_agent(\n",
" client=client,\n",
" tools=[search_tool],\n",
" model = model_name,\n",
" instructions=\"\"\"\n",
" You are a research assistant that can search the web.\n",
" Always cite your sources with URLs when providing information.\n",
" Format your responses as:\n",
"\n",
" FINDINGS:\n",
" [Your summary here]\n",
"\n",
" SOURCES:\n",
" - [Source title](URL)\n",
" \"\"\"\n",
" )\n",
"\n",
"# Example usage\n",
"async def search_example():\n",
" client = LlamaStackClient(base_url=f\"http://{HOST}:{PORT}\")\n",
" agent = await create_search_agent(client)\n",
"\n",
" # Create a session\n",
" session_id = agent.create_session(\"search-session\")\n",
"\n",
" # Example queries\n",
" queries = [\n",
" \"What are the latest developments in quantum computing?\",\n",
" #\"Who won the most recent Super Bowl?\",\n",
" ]\n",
"\n",
" for query in queries:\n",
" print(f\"\\nQuery: {query}\")\n",
" print(\"-\" * 50)\n",
"\n",
" response = agent.create_turn(\n",
" messages=[{\"role\": \"user\", \"content\": query}],\n",
" session_id=session_id,\n",
" )\n",
"\n",
" async for log in EventLogger().log(response):\n",
" log.print()\n",
"\n",
"# Run the example (in Jupyter, use asyncio.run())\n",
"await search_example()"
]
},
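{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above relies on top-level `await`, which Jupyter supports natively. If you adapt the example into a standalone script, wrap the entry point with `asyncio.run()` instead:\n",
"\n",
"```python\n",
"import asyncio\n",
"\n",
"if __name__ == \"__main__\":\n",
"    asyncio.run(search_example())\n",
"```"
]
},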
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Custom Tool Creation\n",
"\n",
"Let's create a custom weather tool:"
]
},
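{
"cell_type": "markdown",
"metadata": {},
"source": [
"A custom tool here is a plain Python object exposing four methods, which the `Agent` uses to describe the tool to the model and to execute it: `get_name()`, `get_description()`, `get_params_definition()`, and an async `run(messages)` that receives the model's tool-call messages and returns the tool's result as completion messages. A bare skeleton, assuming this duck-typed interface:\n",
"\n",
"```python\n",
"class MyTool:\n",
"    def get_name(self) -> str: ...\n",
"    def get_description(self) -> str: ...\n",
"    def get_params_definition(self) -> Dict[str, ToolParamDefinitionParam]: ...\n",
"    async def run(self, messages): ...\n",
"```\n",
"\n",
"The `WeatherTool` below implements this interface with a mocked data source:"
]
},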
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Query: What's the weather like in San Francisco?\n",
"--------------------------------------------------\n"
]
}
],
"source": [
"from typing import TypedDict, Optional, Dict, Any\n",
"from datetime import datetime\n",
"\n",
"from llama_stack_client.types.tool_param_definition_param import ToolParamDefinitionParam\n",
"from llama_stack_client.types import CompletionMessage\n",
"class WeatherTool:\n",
" \"\"\"Example custom tool for weather information.\"\"\"\n",
"\n",
" def get_name(self) -> str:\n",
" return \"get_weather\"\n",
"\n",
" def get_description(self) -> str:\n",
" return \"Get weather information for a location\"\n",
"\n",
" def get_params_definition(self) -> Dict[str, ToolParamDefinitionParam]:\n",
" return {\n",
" \"location\": ToolParamDefinitionParam(\n",
" param_type=\"str\",\n",
" description=\"City or location name\",\n",
" required=True\n",
" ),\n",
" \"date\": ToolParamDefinitionParam(\n",
" param_type=\"str\",\n",
" description=\"Optional date (YYYY-MM-DD)\",\n",
" required=False\n",
" )\n",
" }\n",
" async def run(self,messages):\n",
" for message in messages:\n",
" print(\"tool_calls starting: \", message.tool_calls[0])\n",
" return_dict = message.tool_calls[0].arguments\n",
" location = return_dict.get(\"location\",None)\n",
" date = return_dict.get(\"date\",None)\n",
" print(\"Using Location:\",location)\n",
" if date:\n",
" print(\"Using date:\",date)\n",
" return await self.run_impl(location,date)\n",
" \n",
" async def run_impl(self, location: str, date: Optional[str] = None) -> Dict[str, Any]:\n",
" \"\"\"Simulate getting weather data (replace with actual API call).\"\"\"\n",
" # Mock implementation\n",
" print(\"\"\" {\n",
" \"temperature\": 72.5,\n",
" \"conditions\": \"partly cloudy\",\n",
" \"humidity\": 65.0\n",
" }\"\"\")\n",
" return [CompletionMessage(\n",
" content=\"\"\"{\n",
" \"temperature\": 72.5,\n",
" \"conditions\": \"partly cloudy\",\n",
" \"humidity\": 65.0\n",
" }\"\"\",\n",
" role='assistant',\n",
" stop_reason='end_of_message',\n",
" tool_calls=[],\n",
" )],\n",
" \n",
"\n",
"async def create_weather_agent(client: LlamaStackClient) -> Agent:\n",
" \"\"\"Create an agent with weather tool capability.\"\"\"\n",
" models_response = client.models.list()\n",
" for model in models_response:\n",
" if model.identifier.endswith(\"Instruct\"):\n",
" model_name = model.llama_model\n",
" agent_config = AgentConfig(\n",
" model=model_name,\n",
" instructions=\"\"\"\n",
" You are a weather assistant that can provide weather information.\n",
" Always specify the location clearly in your responses.\n",
" Include both temperature and conditions in your summaries.\n",
" \"\"\",\n",
" sampling_params={\n",
" \"strategy\": \"greedy\",\n",
" \"temperature\": 1.0,\n",
" \"top_p\": 0.9,\n",
" },\n",
" tools=[\n",
" {\n",
" \"function_name\": \"get_weather\",\n",
" \"description\": \"Get weather information for a location\",\n",
" \"parameters\": {\n",
" \"location\": {\n",
" \"param_type\": \"str\",\n",
" \"description\": \"City or location name\",\n",
" \"required\": True,\n",
" },\n",
" \"date\": {\n",
" \"param_type\": \"str\",\n",
" \"description\": \"Optional date (YYYY-MM-DD)\",\n",
" \"required\": False,\n",
" },\n",
" },\n",
" \"type\": \"function_call\",\n",
" }\n",
" ],\n",
" tool_choice=\"auto\",\n",
" tool_prompt_format=\"json\",\n",
" input_shields=[],\n",
" output_shields=[],\n",
" enable_session_persistence=True\n",
" )\n",
"\n",
" # Create the agent with the tool\n",
" weather_tool = WeatherTool()\n",
" agent = Agent(\n",
" client=client,\n",
" agent_config=agent_config,\n",
" custom_tools=[weather_tool]\n",
" )\n",
"\n",
" return agent\n",
"\n",
"# Example usage\n",
"async def weather_example():\n",
" client = LlamaStackClient(base_url=f\"http://{HOST}:{PORT}\")\n",
" agent = await create_weather_agent(client)\n",
" session_id = agent.create_session(\"weather-session\")\n",
"\n",
" queries = [\n",
" \"What's the weather like in San Francisco?\",\n",
" \"Tell me the weather in Tokyo tomorrow\",\n",
" ]\n",
"\n",
" for query in queries:\n",
" print(f\"\\nQuery: {query}\")\n",
" print(\"-\" * 50)\n",
"\n",
" response = agent.create_turn(\n",
" messages=[{\"role\": \"user\", \"content\": query}],\n",
" session_id=session_id,\n",
" )\n",
" \n",
" async for log in EventLogger().log(response):\n",
" log.print()\n",
"\n",
"# For Jupyter notebooks\n",
"import nest_asyncio\n",
"nest_asyncio.apply()\n",
"\n",
"# Run the example\n",
"await weather_example()"
]
},
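{
"cell_type": "markdown",
"metadata": {},
"source": [
"The mock `run_impl` above always returns the same forecast. To back the tool with live data, swap in a real HTTP call. The sketch below is a hypothetical example using `httpx`; the endpoint URL and its response shape are assumptions, not a real service, so adapt them to whatever weather provider you use:\n",
"\n",
"```python\n",
"import httpx\n",
"\n",
"async def run_impl(self, location: str, date: Optional[str] = None) -> List[CompletionMessage]:\n",
"    \"\"\"Fetch weather data from a (hypothetical) HTTP API.\"\"\"\n",
"    params = {\"q\": location}\n",
"    if date:\n",
"        params[\"date\"] = date\n",
"    async with httpx.AsyncClient() as http:\n",
"        resp = await http.get(\"https://api.example-weather.com/v1/current\", params=params)\n",
"        resp.raise_for_status()\n",
"    return [CompletionMessage(\n",
"        content=resp.text,  # hand the raw JSON back to the model to summarize\n",
"        role=\"assistant\",\n",
"        stop_reason=\"end_of_message\",\n",
"        tool_calls=[],\n",
"    )]\n",
"```"
]
},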
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Thanks for checking out this tutorial, hopefully you can now automate everything with Llama! :D\n",
"\n",
"Next up, we learn another hot topic of LLMs: Memory and Rag. Continue learning [here](./04_Memory101.ipynb)!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.15"
}
},
"nbformat": 4,
"nbformat_minor": 4
}