Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-03 09:21:45 +00:00)
Commit f436348124 (parent 18c68c4950): regen markdown

1 changed file with 2 additions and 2 deletions
```diff
@@ -22,7 +22,7 @@ There are 4 different roles that are supported by Llama 4

 - `system`: Sets the context in which to interact with the AI model. It typically includes rules, guidelines, or necessary information that helps the model respond effectively.
 - `user`: Represents the human interacting with the model. It includes the inputs, commands, and questions to the model.
 - `assistant`: Represents the response generated by the AI model based on the context provided in the `system`, `tool` and `user` prompts.
-- `tool`: Represents the output of a tool call when sent back to the model from the executor. (The actual token used by the model is `<|ipython|>`.)
+- `tool`: Represents the output of a tool call when sent back to the model from the executor. Note that the role name used in the prompt template is `ipython`; scroll down to the last example to see how this is used.

 # Llama 4 Instruct Model
```
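The hunk above renames the `tool` role to `ipython` at the prompt-template level. A minimal sketch of that role mapping, assuming a simple string renderer (the `render_message` helper and the `<|header_start|>`/`<|header_end|>`/`<|eot|>` token layout are illustrative assumptions, not the llama-stack API; only the `ipython` role name and the `<|eom|>` terminator come from this doc):

```python
# Map the four documented chat roles onto the names used in the prompt
# template; `tool` is rendered under the name `ipython` per the diff above.
ROLE_NAMES = {
    "system": "system",
    "user": "user",
    "assistant": "assistant",
    "tool": "ipython",  # role name in the template, per this commit
}

def render_message(role: str, content: str) -> str:
    """Render one message. Header/terminator tokens here are assumptions
    made for illustration; tool output ends with <|eom|> per the notes."""
    name = ROLE_NAMES[role]
    end = "<|eom|>" if role == "tool" else "<|eot|>"  # <|eot|> is assumed
    return f"<|header_start|>{name}<|header_end|>\n\n{content}{end}"
```

For example, `render_message("tool", "...")` produces a message headed `ipython` and terminated by `<|eom|>`, matching the renaming this commit documents.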
```diff
@@ -319,7 +319,7 @@ The top 2 latest trending songs are:

 ##### Notes

-- Tool outputs should be passed back to the model in the `tool` role, which uses the `<|ipython|>` tag.
+- Tool outputs should be passed back to the model in the `tool` (a.k.a. `ipython`) role.
 - The model parses the tool output contents until it encounters the `<|eom|>` tag. It uses this to synthesize an appropriate response to the query.
```
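The note about parsing tool output up to the `<|eom|>` tag can be sketched as follows (a hypothetical helper over a raw decoded string, not llama-stack code; only the `<|eom|>` terminator comes from this doc):

```python
# Minimal sketch: take the tool-call output preceding the <|eom|> tag,
# which the notes say terminates tool output in the prompt.
def extract_tool_output(decoded: str) -> str:
    end = decoded.find("<|eom|>")
    return decoded if end == -1 else decoded[:end]
```

For instance, `extract_tool_output('{"songs": ["A", "B"]}<|eom|>')` yields just the JSON payload, which the model would then use to synthesize its response.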
|
Loading…
Add table
Add a link
Reference in a new issue