Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-07 14:26:44 +00:00)
fix(docs): update Agents101 notebook for builtin websearch (#2591)
- Switch from BRAVE_SEARCH_API_KEY to TAVILY_SEARCH_API_KEY
- Add provider_data to LlamaStackClient for API key passing
- Use the builtin::websearch toolgroup instead of manual tool config
- Fix message types to use UserMessage instead of plain dicts
- Add streaming support with proper type casting
- Remove async from the EventLogger loop (bug fix)

Fixes websearch functionality in the agents tutorial by properly configuring the Tavily search provider integration.

# What does this PR do?

Fixes the Agents101 tutorial notebook to work with the current Llama Stack websearch implementation. The tutorial was using an outdated Brave Search configuration that no longer works with the current server setup.

**Key Changes:**

- **Switch API provider**: Change from `BRAVE_SEARCH_API_KEY` to `TAVILY_SEARCH_API_KEY` to match the server configuration
- **Fix client setup**: Add `provider_data` to `LlamaStackClient` to properly pass API keys to the server
- **Modernize tool usage**: Replace manual tool configuration with `tools=["builtin::websearch"]`
- **Fix type safety**: Use the `UserMessage` type instead of plain dictionaries for messages
- **Fix streaming**: Add proper streaming support with `stream=True` and type casting
- **Fix EventLogger**: Remove incorrect `async for` usage (should be `for`)

**Why needed:** Users following the tutorial were getting 401 Unauthorized errors because the notebook wasn't properly configured for the Tavily search provider that the server actually uses.

## Test Plan

**Prerequisites:**

1. Start the Llama Stack server with the Ollama template and the `TAVILY_SEARCH_API_KEY` environment variable
2. Set `TAVILY_SEARCH_API_KEY` in your `.env` file

**Testing Steps:**

1. **Clone and setup:**

   ```bash
   git checkout fix-2558-update-agents101
   cd docs/zero_to_hero_guide/
   ```

2. **Start the server with the API key:**

   ```bash
   export TAVILY_SEARCH_API_KEY="your_tavily_api_key"
   podman run -it --network=host -v ~/.llama:/root/.llama:Z \
     --env INFERENCE_MODEL=$INFERENCE_MODEL \
     --env OLLAMA_URL=http://localhost:11434 \
     --env TAVILY_SEARCH_API_KEY=$TAVILY_SEARCH_API_KEY \
     llamastack/distribution-ollama --port $LLAMA_STACK_PORT
   ```

3. **Run the notebook:**
   - Open `07_Agents101.ipynb` in Jupyter
   - Execute all cells in order
   - Cell 5 should run without errors and show successful web search results

**Expected Results:**

- ✅ No 401 Unauthorized errors
- ✅ The agent successfully calls `brave_search.call()` with web results
- ✅ Switzerland travel recommendations appear in the output
- ✅ Follow-up questions work correctly

**Before this fix:** Users got `401 Unauthorized` errors and the tutorial failed.
**After this fix:** The tutorial works end-to-end with proper web search functionality.

**Tested with:**

- Tavily API key (free tier)
- Ollama distribution template
- Llama-3.2-3B-Instruct model
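For readers who want the end state at a glance, the sketch below consolidates the updated notebook cell described above. It is assembled from the diff in this commit; `HOST`, `PORT`, `MODEL_NAME`, the agent instructions, and the example prompt are assumptions standing in for values defined in earlier notebook cells.

```python
# Minimal sketch of the updated Agents101 cell, assembled from this commit's
# diff. HOST, PORT, MODEL_NAME, the instructions text, and the prompt are
# assumed placeholders for values defined earlier in the notebook.
import os
from typing import Iterator, cast

from dotenv import load_dotenv
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import UserMessage

load_dotenv()
TAVILY_SEARCH_API_KEY = os.environ["TAVILY_SEARCH_API_KEY"]

HOST = "localhost"                                # assumed notebook value
PORT = 8321                                       # assumed notebook value
MODEL_NAME = "meta-llama/Llama-3.2-3B-Instruct"   # assumed notebook value


async def agent_example() -> None:
    # The Tavily key travels to the server via provider_data rather than a
    # per-tool api_key field, which is what fixes the 401 errors.
    client = LlamaStackClient(
        base_url=f"http://{HOST}:{PORT}",
        provider_data={"tavily_search_api_key": TAVILY_SEARCH_API_KEY},
    )
    # The builtin::websearch toolgroup replaces the manual brave_search
    # tool configuration the notebook used before. sampling_params are
    # unchanged by this PR and omitted here for brevity.
    agent = Agent(
        client,
        model=MODEL_NAME,
        instructions="You are a helpful travel assistant.",  # assumed wording
        tools=["builtin::websearch"],
    )
    session_id = agent.create_session("test-session")
    print(f"Created session_id={session_id} for Agent({agent.agent_id})")

    user_prompts = [
        "Which are the top places to visit in Switzerland?",  # assumed prompt
    ]
    for prompt in user_prompts:
        response = agent.create_turn(
            messages=[UserMessage(role="user", content=prompt)],
            session_id=session_id,
            stream=True,
        )
        # With stream=True the turn yields events; EventLogger.log() is a
        # plain generator here, so iterate with `for`, not `async for`.
        for log in EventLogger().log(cast(Iterator, response)):
            log.print()
```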
This commit is contained in:
parent 040424acf5
commit 577ec382e1
1 changed file with 14 additions and 15 deletions
docs/zero_to_hero_guide/07_Agents101.ipynb

@@ -45,7 +45,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 1,
+    "execution_count": 14,
     "metadata": {},
     "outputs": [],
     "source": [
@@ -65,7 +65,7 @@
      "from dotenv import load_dotenv\n",
      "\n",
      "load_dotenv()\n",
-     "BRAVE_SEARCH_API_KEY = os.environ[\"BRAVE_SEARCH_API_KEY\"]\n"
+     "TAVILY_SEARCH_API_KEY = os.environ[\"TAVILY_SEARCH_API_KEY\"]\n"
     ]
    },
    {
@@ -110,10 +110,17 @@
      "from llama_stack_client import LlamaStackClient\n",
      "from llama_stack_client.lib.agents.agent import Agent\n",
      "from llama_stack_client.lib.agents.event_logger import EventLogger\n",
+     "from llama_stack_client.types import UserMessage\n",
+     "from typing import cast, Iterator\n",
      "\n",
      "\n",
      "async def agent_example():\n",
-     "    client = LlamaStackClient(base_url=f\"http://{HOST}:{PORT}\")\n",
+     "    client = LlamaStackClient(\n",
+     "        base_url=f\"http://{HOST}:{PORT}\",\n",
+     "        provider_data={\n",
+     "            \"tavily_search_api_key\": TAVILY_SEARCH_API_KEY,\n",
+     "        }\n",
+     "    )\n",
      "    agent = Agent(\n",
      "        client,\n",
      "        model=MODEL_NAME,\n",
@@ -123,13 +130,7 @@
      "                \"type\": \"greedy\",\n",
      "            },\n",
      "        },\n",
-     "        tools=[\n",
-     "            {\n",
-     "                \"type\": \"brave_search\",\n",
-     "                \"engine\": \"brave\",\n",
-     "                \"api_key\": BRAVE_SEARCH_API_KEY,\n",
-     "            }\n",
-     "        ],\n",
+     "        tools=[\"builtin::websearch\"],\n",
      "    )\n",
      "    session_id = agent.create_session(\"test-session\")\n",
      "    print(f\"Created session_id={session_id} for Agent({agent.agent_id})\")\n",
@@ -142,15 +143,13 @@
      "    for prompt in user_prompts:\n",
      "        response = agent.create_turn(\n",
      "            messages=[\n",
-     "                {\n",
-     "                    \"role\": \"user\",\n",
-     "                    \"content\": prompt,\n",
-     "                }\n",
+     "                UserMessage(role=\"user\", content=prompt)\n",
      "            ],\n",
      "            session_id=session_id,\n",
+     "            stream=True,\n",
      "        )\n",
      "\n",
-     "        async for log in EventLogger().log(response):\n",
+     "        for log in EventLogger().log(cast(Iterator, response)):\n",
      "            log.print()\n",
      "\n",
      "\n",
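One detail the diff leaves implicit is how the cell is driven: `agent_example()` is a coroutine, so inside Jupyter the cell can simply `await agent_example()` (IPython allows top-level await). Outside a notebook, a minimal sketch, assuming the `agent_example` definition above, would be:

```python
import asyncio

# Drive the coroutine from a plain script; inside the notebook,
# `await agent_example()` works directly in a cell.
asyncio.run(agent_example())
```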