Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 02:53:30 +00:00)
Tests:

LLAMA_STACK_CONFIG=http://localhost:5002 pytest -s -v tests/integration/inference --safety-shield meta-llama/Llama-Guard-3-8B --vision-model meta-llama/Llama-4-Scout-17B-16E-Instruct --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct

LLAMA_STACK_CONFIG=http://localhost:5002 pytest -s -v tests/integration/inference --safety-shield meta-llama/Llama-Guard-3-8B --vision-model Llama-4-Maverick-17B-128E-Instruct --text-model Llama-4-Maverick-17B-128E-Instruct

Co-authored-by: Eric Huang <erichuang@fb.com>
prompt_templates
quantization
vision
__init__.py
args.py
chat_format.py
datatypes.py
ffn.py
generation.py
model.py
moe.py
preprocess.py
prompt_format.md
prompts.py
tokenizer.model
tokenizer.py