From 9ff6ceef45ceb786d73b23f518c73c947204a34d Mon Sep 17 00:00:00 2001
From: Xi Yan
Date: Mon, 3 Mar 2025 16:08:58 -0800
Subject: [PATCH] header size

---
 .../Llama_Stack_Agent_Workflows.ipynb | 22 +++++++++----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/notebooks/Llama_Stack_Agent_Workflows.ipynb b/docs/notebooks/Llama_Stack_Agent_Workflows.ipynb
index c922c735d..9b40e9b20 100644
--- a/docs/notebooks/Llama_Stack_Agent_Workflows.ipynb
+++ b/docs/notebooks/Llama_Stack_Agent_Workflows.ipynb
@@ -1798,6 +1798,15 @@
     "    print(\"\\n\")\n"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### 1.3.1 Monitor Parallelization Internals\n",
+    "\n",
+    "Now, let's see how the worker agents processed the tasks. "
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": 126,
@@ -2223,16 +2232,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### 1.3.1 Monitor Parallelization Internals\n",
-    "\n",
-    "Now, let's see how the worker agents processed the tasks. "
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "#### 2. Evaluator-Optimizer Workflow\n",
+    "## 2. Evaluator-Optimizer Workflow\n",
     "\n",
     "In the evaluator-optimizer workflow, one LLM call generates a response while another provider evaluation and feedback in a loop. \n",
     "\n",
@@ -2399,7 +2399,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "#### 2.1. Monitor Generator-Evaluator Internals\n",
+    "### 2.1. Monitor Generator-Evaluator Internals\n",
     "\n",
     "In addition to final output from workflow, we can also look at how the generator and evaluator agents processed the user's request. Note that the `evaluator_agent` PASSED after 1 iteration. "
    ]