# What does this PR do?

Move around bits. This makes the copies from llama-models _much_ easier to maintain and ensures we don't entangle meta-reference-specific tidbits into llama-models code even by accident.

Also, kills the meta-reference-quantized-gpu distro and rolls the quantization deps into meta-reference-gpu.

## Test Plan

```
LLAMA_MODELS_DEBUG=1 \
  with-proxy llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=<DIR> \
  --env MODEL_PARALLEL_SIZE=4 \
  --env QUANTIZATION_TYPE=fp8_mixed
```

Start a server with and without quantization. Point integration tests to it using:

```
pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config http://localhost:8321 \
  --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
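For the "without quantization" half of the test plan, presumably the same invocation works with the `QUANTIZATION_TYPE` variable simply omitted, so the server falls back to its default unquantized precision. A minimal sketch (assumes the same checkpoint directory and parallelism settings as above):

```
# Sketch: unquantized run; identical to the command above except
# QUANTIZATION_TYPE is dropped, so no fp8 quantization deps are exercised.
LLAMA_MODELS_DEBUG=1 \
  with-proxy llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=<DIR> \
  --env MODEL_PARALLEL_SIZE=4
```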