llama-stack-mirror/docs
Sumanth Kamenani 577ec382e1
fix(docs): update Agents101 notebook for builtin websearch (#2591)
- Switch from BRAVE_SEARCH_API_KEY to TAVILY_SEARCH_API_KEY
- Add provider_data to LlamaStackClient for API key passing
- Use builtin::websearch toolgroup instead of manual tool config
- Fix message types to use UserMessage instead of plain dict
- Add streaming support with proper type casting
- Remove async from EventLogger loop (bug fix)

Fixes websearch functionality in the Agents101 tutorial by properly configuring
the Tavily search provider integration.
# What does this PR do?

Fixes the Agents101 tutorial notebook to work with the current Llama
Stack websearch implementation. The tutorial was using outdated Brave
Search configuration that no longer works with the current server setup.

**Key Changes** (see the sketch after this list):
- **Switch API provider**: Change from `BRAVE_SEARCH_API_KEY` to
`TAVILY_SEARCH_API_KEY` to match server configuration
- **Fix client setup**: Add `provider_data` to `LlamaStackClient` to
properly pass API keys to server
- **Modernize tool usage**: Replace manual tool configuration with
`tools=["builtin::websearch"]`
- **Fix type safety**: Use `UserMessage` type instead of plain
dictionaries for messages
- **Fix streaming**: Add proper streaming support with `stream=True` and
type casting
- **Fix EventLogger**: Remove incorrect `async for` usage (should be
`for`)
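
Taken together, the updated notebook flow looks roughly like the sketch below. This is a minimal illustration rather than the exact notebook content — the base URL, model identifier, instructions, and example question are placeholder assumptions — and it assumes the `llama-stack-client` Python SDK's `Agent` and `EventLogger` helpers:

```python
import os

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import UserMessage

# provider_data carries the Tavily API key to the server-side websearch provider.
client = LlamaStackClient(
    base_url=f"http://localhost:{os.environ.get('LLAMA_STACK_PORT', '8321')}",
    provider_data={"tavily_search_api_key": os.environ["TAVILY_SEARCH_API_KEY"]},
)

# builtin::websearch replaces the old manual brave_search tool configuration.
agent = Agent(
    client,
    model=os.environ.get("INFERENCE_MODEL", "meta-llama/Llama-3.2-3B-Instruct"),
    instructions="You are a helpful assistant. Use web search for current information.",
    tools=["builtin::websearch"],
)

session_id = agent.create_session("agents101-websearch")

# UserMessage instead of a plain dict, and stream=True for incremental output.
response = agent.create_turn(
    messages=[UserMessage(role="user", content="What are the top places to visit in Switzerland?")],
    session_id=session_id,
    stream=True,
)

# EventLogger yields from a regular iterator, so a plain `for` loop (not `async for`) is correct.
for log in EventLogger().log(response):
    log.print()
```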

**Why needed:** Users following the tutorial were getting 401
Unauthorized errors because the notebook wasn't properly configured for
the Tavily search provider that the server actually uses.

## Test Plan

**Prerequisites:**
1. Start the Llama Stack server with the Ollama template and the
   `TAVILY_SEARCH_API_KEY` environment variable
2. Set `TAVILY_SEARCH_API_KEY` in your `.env` file (a quick sanity check is
   sketched below)
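
To confirm the second prerequisite before opening the notebook, a quick check can be run from `docs/zero_to_hero_guide/`. This sketch assumes the `python-dotenv` package is installed and that the `.env` file sits in the current directory:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Pull TAVILY_SEARCH_API_KEY (and anything else) from the local .env file.
load_dotenv()

api_key = os.environ.get("TAVILY_SEARCH_API_KEY")
if not api_key:
    raise RuntimeError(
        "TAVILY_SEARCH_API_KEY is not set; the websearch cells will fail with 401 Unauthorized"
    )
print(f"Tavily API key loaded ({len(api_key)} characters)")
```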

**Testing Steps:**
1. **Clone and setup:**
   ```bash
   git checkout fix-2558-update-agents101
   cd docs/zero_to_hero_guide/
   ```

2. **Start server with API key:**
   ```bash
   export TAVILY_SEARCH_API_KEY="your_tavily_api_key"
   podman run -it --network=host -v ~/.llama:/root/.llama:Z \
     --env INFERENCE_MODEL=$INFERENCE_MODEL \
     --env OLLAMA_URL=http://localhost:11434 \
     --env TAVILY_SEARCH_API_KEY=$TAVILY_SEARCH_API_KEY \
     llamastack/distribution-ollama --port $LLAMA_STACK_PORT
   ```

3. **Run the notebook:**
   - Open `07_Agents101.ipynb` in Jupyter
   - Execute all cells in order
   - Cell 5 should run without errors and show successful web search results

**Expected Results:**
- No 401 Unauthorized errors
- Agent successfully calls `brave_search.call()` with web results
- Switzerland travel recommendations appear in output
- Follow-up questions work correctly

**Before this fix:** Users got `401 Unauthorized` errors and the tutorial
failed.
**After this fix:** The tutorial works end-to-end with proper web search
functionality.

**Tested with:**
- Tavily API key (free tier)
- Ollama distribution template  
- Llama-3.2-3B-Instruct model
2025-07-03 11:14:51 +02:00
| Name | Last commit | Last updated |
|---|---|---|
| _static | feat: Add webmethod for deleting openai responses (#2160) | 2025-06-30 11:28:02 +02:00 |
| notebooks | feat: Add Nvidia e2e beginner notebook and tool calling notebook (#1964) | 2025-06-16 11:29:01 -04:00 |
| openapi_generator | feat: Add webmethod for deleting openai responses (#2160) | 2025-06-30 11:28:02 +02:00 |
| resources | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| source | docs: update full list of providers with matched APIs and dockerhub images (#2452) | 2025-07-03 10:12:56 +02:00 |
| zero_to_hero_guide | fix(docs): update Agents101 notebook for builtin websearch (#2591) | 2025-07-03 11:14:51 +02:00 |
| conftest.py | fix: sleep after notebook test | 2025-03-23 14:03:35 -07:00 |
| contbuild.sh | Fix broken links with docs | 2024-11-22 20:42:17 -08:00 |
| dog.jpg | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| getting_started.ipynb | chore: remove last instances of code-interpreter provider (#2143) | 2025-05-12 10:54:43 -07:00 |
| getting_started_llama4.ipynb | docs: llama4 getting started nb (#1878) | 2025-04-06 18:51:34 -07:00 |
| getting_started_llama_api.ipynb | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| license_header.txt | Initial commit | 2024-07-23 08:32:33 -07:00 |
| make.bat | feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) | 2025-04-30 11:35:49 -07:00 |
| Makefile | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| readme.md | chore: use groups when running commands (#2298) | 2025-05-28 09:13:16 -07:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

## Render locally

From the llama-stack root directory, run the following command to render the docs locally:

```bash
uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
```

You can open up the docs in your browser at http://localhost:8000.

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: