From f436348124bf732ed38fc23a2379d3f0b2187a29 Mon Sep 17 00:00:00 2001
From: Suraj Subramanian
Date: Mon, 7 Apr 2025 11:29:03 -0700
Subject: [PATCH] regen markdown

---
 llama_stack/models/llama/llama4/prompt_format.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/llama_stack/models/llama/llama4/prompt_format.md b/llama_stack/models/llama/llama4/prompt_format.md
index 9deff1d1c..6774b720d 100644
--- a/llama_stack/models/llama/llama4/prompt_format.md
+++ b/llama_stack/models/llama/llama4/prompt_format.md
@@ -22,7 +22,7 @@ There are 4 different roles that are supported by Llama 4
 - `system`: Sets the context in which to interact with the AI model. It typically includes rules, guidelines, or necessary information that helps the model respond effectively.
 - `user`: Represents the human interacting with the model. It includes the inputs, commands, and questions to the model.
 - `assistant`: Represents the response generated by the AI model based on the context provided in the `system`, `tool` and `user` prompts.
-- `tool`: Represents the output of a tool call when sent back to the model from the executor. (The actual token used by the model is `<|ipython|>`.)
+- `tool`: Represents the output of a tool call when sent back to the model from the executor. Note that the role name used in the prompt template is `ipython`; scroll down to the last example to see how this is used.
 
 # Llama 4 Instruct Model
@@ -319,7 +319,7 @@ The top 2 latest trending songs are:
 ##### Notes
 
-- Tool outputs should be passed back to the model in the `tool` role, which uses the `<|ipython|>` tag.
+- Tool outputs should be passed back to the model in the `tool` (a.k.a. `ipython`) role.
 - The model parses the tool output contents until it encounters the `<|eom|>` tag. It uses this to synthesize an appropriate response to the query.
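
The convention the patch documents (tool executor output is sent back under the `ipython` role name rather than a `<|ipython|>` token) can be sketched as a plain message list. This is an illustrative sketch only: the dict-based message shape and the tool-output content are assumptions for the example, not the `llama_stack` API.

```python
# Hedged sketch: a conversation turn where a tool's output is returned
# to the model under the `ipython` role, per the patched documentation.
# The message dict shape and contents here are illustrative assumptions.
messages = [
    {"role": "user", "content": "What are the top 2 trending songs?"},
    {"role": "assistant", "content": "<tool call emitted by the model>"},
    # Tool executor output goes back to the model as the `ipython` role;
    # the model reads this content until it emits an `<|eom|>` tag.
    {"role": "ipython", "content": '{"songs": ["Song A", "Song B"]}'},
]

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'ipython']
```

Note the role name is `ipython`, matching the wording the patch introduces, not a special `<|ipython|>` token as the pre-patch text claimed.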