llama-stack/llama_stack/providers/inline/inference/meta_reference
Latest commit: ehhuang 7ed137e963
fix: meta ref inference (#2022)
MAX_BATCH_SIZE=10 LLAMA_MODELS_DEBUG=1 LLAMA_STACK_PORT=5002 \
  LLAMA_STACK_LOGGING='all=info' llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=...

LLAMA_STACK_CONFIG=http://localhost:5002/ pytest -s -v \
  tests/integration/inference \
  --safety-shield meta-llama/Llama-Guard-3-8B \
  --vision-model meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
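
Once the server is up, a single request makes a quick smoke test before the full integration run. A minimal sketch, assuming the llama-stack-client Python package is installed and the model ID matches the INFERENCE_MODEL served above:

# Quick smoke test against the server started above (assumption:
# llama-stack-client is installed; endpoint and model match the commands above).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5002")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.completion_message.content)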

Co-authored-by: Eric Huang <erichuang@fb.com>
2025-04-24 13:03:35 -07:00
File               Last commit                                                                   Date
__init__.py        refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
common.py          refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
config.py          feat: add batch inference API to llama stack inference (#1945)               2025-04-12 11:41:12 -07:00
generators.py      feat: add batch inference API to llama stack inference (#1945)               2025-04-12 11:41:12 -07:00
inference.py       fix: meta ref inference (#2022)                                              2025-04-24 13:03:35 -07:00
model_parallel.py  feat: add batch inference API to llama stack inference (#1945)               2025-04-12 11:41:12 -07:00
parallel_utils.py  fix: meta ref inference (#2022)                                              2025-04-24 13:03:35 -07:00