# What does this PR do?

A user reports in https://github.com/meta-llama/llama-stack/issues/1769#issuecomment-2755564632 that the Agent invokes a tool even on a prompt as simple as 'Hello'. This PR updates the default prompt to address that. It also moves the instruction part out of `function_description` so that users can override the instruction if desired.

## Test Plan

<img width="1344" alt="image" src="https://github.com/user-attachments/assets/c606d65d-071f-4211-a719-b4742676acda" />

Performance on 100 HotpotQA questions is similar to the current prompt.
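The separation described above can be sketched as follows. Note this is a minimal illustration, not llama-stack's actual API: the function and variable names (`build_system_prompt`, `DEFAULT_INSTRUCTION`) and the prompt wording are hypothetical.

```python
# Hypothetical sketch: keep the tool-use instruction separate from the
# function descriptions, so a caller can override the instruction alone.

DEFAULT_INSTRUCTION = (
    "Only call a function if the user's request requires it; "
    "otherwise respond normally."  # hypothetical wording, not the real default
)


def build_system_prompt(
    function_description: str,
    instruction: str = DEFAULT_INSTRUCTION,
) -> str:
    """Compose the system prompt; `instruction` is user-overridable."""
    return (
        f"{instruction}\n\n"
        f"Here is a list of functions you can invoke:\n"
        f"{function_description}"
    )


# A caller who wants a more conservative agent can swap in its own instruction
# without touching the function descriptions:
custom = build_system_prompt(
    "get_weather(city: str) -> str",
    instruction="Never call a function unless the user explicitly asks for data.",
)
```

Because the instruction is a separate parameter rather than embedded in `function_description`, overriding it no longer requires re-stating every tool signature.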