ehhuang | 664161c462 | 2025-05-06 22:18:31 -07:00
fix: llama4 tool use prompt fix (#2103)
Tests:
LLAMA_STACK_CONFIG=http://localhost:5002 pytest -s -v tests/integration/inference --safety-shield meta-llama/Llama-Guard-3-8B --vision-model meta-llama/Llama-4-Scout-17B-16E-Instruct --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
LLAMA_STACK_CONFIG=http://localhost:5002 pytest -s -v tests/integration/inference --safety-shield meta-llama/Llama-Guard-3-8B --vision-model Llama-4-Maverick-17B-128E-Instruct --text-model Llama-4-Maverick-17B-128E-Instruct
Co-authored-by: Eric Huang <erichuang@fb.com>
ehhuang | 0266b20535 | 2025-04-25 15:52:15 -07:00
docs: update prompt_format.md for llama4 (#2035)
torchrun --nproc_per_node=8 scripts/generate_prompt_format.py meta-llama/Llama-4-Scout-17B-16E-Instruct ~/local/checkpoints/<path>/ llama_stack.models.llama.llama4.prompts llama_stack/models/llama/llama4/prompt_format.md
Co-authored-by: Eric Huang <erichuang@fb.com>
Ashwin Bharambe | b8f1561956 | 2025-04-05 11:53:35 -07:00
feat: introduce llama4 support (#1877)
As title says. Details in README, elsewhere.