ehhuang c9ab72fa82
Support sys_prompt behavior in inference (#937)
# What does this PR do?

The current default system prompt for Llama 3.2 tends to over-index on
tool calling and performs poorly when the prompt does not require tool
calling.

This PR adds an option to override the default system prompt and
organizes tool-related configuration into a new config object.
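As a rough sketch of the intended behavior (the names `ToolConfig`, `SystemMessageBehavior`, and `resolve_system_prompt` here are illustrative assumptions, not necessarily the exact llama-stack API): a caller-supplied system prompt can either replace the default entirely or be appended to it, with the choice carried on the new tool config object.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class SystemMessageBehavior(Enum):
    APPEND = "append"    # merge the caller's prompt after the default
    REPLACE = "replace"  # the caller's prompt overrides the default


@dataclass
class ToolConfig:
    # Illustrative: groups tool-related options into one object.
    system_message_behavior: SystemMessageBehavior = SystemMessageBehavior.APPEND


# Hypothetical stand-in for the model's built-in default.
DEFAULT_SYS_PROMPT = "You are a helpful assistant with tool-calling abilities."


def resolve_system_prompt(user_prompt: Optional[str], config: ToolConfig) -> str:
    """Pick the effective system prompt for an inference request."""
    if user_prompt is None:
        return DEFAULT_SYS_PROMPT
    if config.system_message_behavior is SystemMessageBehavior.REPLACE:
        return user_prompt
    # APPEND: keep the tool-calling default, then add the caller's text.
    return f"{DEFAULT_SYS_PROMPT}\n\n{user_prompt}"
```

With `REPLACE`, prompts that need no tool calling are no longer steered toward tools by the default; with `APPEND`, the default's tool instructions are preserved.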

- [ ] Addresses issue (#issue)


## Test Plan

python -m unittest llama_stack.providers.tests.inference.test_prompt_adapter


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/937).
* #938
* __->__ #937
2025-02-03 23:35:16 -08:00