forked from phoenix-oss/llama-stack-mirror
# What does this PR do?

Mirror of https://github.com/meta-llama/llama-models/pull/324 with some clean up.

## Test Plan

```shell
with-proxy pip install -e .
export INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct
export INFERENCE_CHECKPOINT_DIR=../checkpoints/Llama-4-Scout-17B-16E-Instruct
export QUANTIZATION_TYPE=int4_mixed
with-proxy llama stack build --run --template meta-reference-gpu
```
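Once the stack is up, a quick smoke test can confirm the int4-quantized checkpoint actually serves requests. The sketch below is an assumption-laden illustration, not part of this PR: it assumes the default llama-stack port (`8321`) and the `/v1/inference/chat-completion` route; verify both against your installed llama-stack version.

```python
import json
import urllib.request

# Assumed default llama-stack server address; adjust if your stack listens elsewhere.
BASE_URL = "http://localhost:8321"

# Single-turn chat request for the model started by the commands above.
PAYLOAD = {
    "model_id": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}


def smoke_test(base_url: str = BASE_URL) -> dict:
    """POST one chat message to the running stack and return the parsed response.

    The route below is an assumption based on the llama-stack REST API;
    check it against your deployment before relying on it.
    """
    req = urllib.request.Request(
        f"{base_url}/v1/inference/chat-completion",
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If the server came up cleanly with the int4 checkpoint, calling `smoke_test()` should return a JSON body containing the assistant's reply.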