llama-stack/llama_stack/models/llama/llama4
Jiawen Liu 36a31fe5dd
fix: on-the-fly int4 quantize parameter (#1920)
Mirror of https://github.com/meta-llama/llama-models/pull/324, with some cleanup.

```
# install llama-stack from source (with-proxy wraps the command behind a proxy)
with-proxy pip install -e .
# point the meta-reference provider at a local Llama 4 Scout checkpoint
export INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct
export INFERENCE_CHECKPOINT_DIR=../checkpoints/Llama-4-Scout-17B-16E-Instruct
# request on-the-fly int4 (mixed) quantization
export QUANTIZATION_TYPE=int4_mixed
# build and launch the meta-reference-gpu distribution
with-proxy llama stack build --run --template meta-reference-gpu
```

# What does this PR do?
Fixes handling of the on-the-fly int4 quantization parameter so that setting
`QUANTIZATION_TYPE=int4_mixed` is passed through correctly to the meta-reference
Llama 4 implementation.

## Test Plan
Run the commands above: install llama-stack from source, point `INFERENCE_MODEL`
and `INFERENCE_CHECKPOINT_DIR` at a local Llama-4-Scout-17B-16E-Instruct
checkpoint, set `QUANTIZATION_TYPE=int4_mixed`, and build and launch the
`meta-reference-gpu` template.
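
For context, `int4_mixed` requests on-the-fly quantization: weights are converted to 4-bit integers at load time instead of being read from a pre-quantized checkpoint. The snippet below is a minimal, self-contained sketch of group-wise symmetric int4 weight quantization to illustrate the idea; the function names and the group size of 128 are assumptions for the example, and this is not the code in the `quantization` directory.

```python
import torch

def quantize_int4_groupwise(weight: torch.Tensor, group_size: int = 128):
    """Illustrative group-wise symmetric int4 quantization of a 2-D weight."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    # view each output row as groups of `group_size` input channels
    grouped = weight.reshape(out_features, in_features // group_size, group_size)
    # symmetric scaling: the largest magnitude in each group maps to 7
    scales = grouped.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    # int4 values live in [-8, 7]; store them in int8 here for simplicity
    q = torch.clamp(torch.round(grouped / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize_int4_groupwise(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # rescale each group and flatten back to the original (out, in) shape
    return (q.float() * scales).reshape(q.shape[0], -1)

if __name__ == "__main__":
    w = torch.randn(16, 256)
    q, s = quantize_int4_groupwise(w)
    print("max abs error:", (w - dequantize_int4_groupwise(q, s)).abs().max().item())
```

In a real on-the-fly scheme, the quantized tensors and scales replace the original parameters as the checkpoint is loaded, and the forward pass either dequantizes them or uses int4 matmul kernels directly.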

2025-04-09 15:00:12 -07:00
| Name | Last commit | Last updated |
|---|---|---|
| `quantization` | fix: on-the-fly int4 quantize parameter (#1920) | 2025-04-09 15:00:12 -07:00 |
| `vision` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `__init__.py` | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| `args.py` | fix: Mirror llama4 rope scaling fixes, small model simplify (#1917) | 2025-04-09 11:28:45 -07:00 |
| `chat_format.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `datatypes.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `ffn.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `generation.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `model.py` | fix: Mirror llama4 rope scaling fixes, small model simplify (#1917) | 2025-04-09 11:28:45 -07:00 |
| `moe.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `preprocess.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `prompt_format.md` | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| `prompts.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `tokenizer.model` | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| `tokenizer.py` | fix: meta reference + llama4 tokenizer fix | 2025-04-09 00:46:32 -07:00 |