llama-stack-mirror/llama_stack
ehhuang 0266b20535
docs: update prompt_format.md for llama4 (#2035)
torchrun --nproc_per_node=8 scripts/generate_prompt_format.py \
  meta-llama/Llama-4-Scout-17B-16E-Instruct \
  ~/local/checkpoints/<path>/ \
  llama_stack.models.llama.llama4.prompts \
  llama_stack/models/llama/llama4/prompt_format.md

Co-authored-by: Eric Huang <erichuang@fb.com>
2025-04-25 15:52:15 -07:00
apis feat(agents): add agent naming functionality (#1922) 2025-04-17 07:02:47 -07:00
cli feat(cli): add interactive tab completion for image type selection (#2027) 2025-04-25 16:57:42 +02:00
distribution fix: add endpoint route debugs 2025-04-25 10:40:12 -07:00
models docs: update prompt_format.md for llama4 (#2035) 2025-04-25 15:52:15 -07:00
providers fix: Correctly parse algorithm_config when launching NVIDIA customization job; fix internal request handler (#2025) 2025-04-25 13:21:50 -07:00
strong_typing chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
templates docs: Fix missing --gpu all flag in Docker run commands (#2026) 2025-04-25 12:17:31 -07:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: Remove style tags from log formatter (#1808) 2025-03-27 10:18:21 -04:00
schema_utils.py fix: dont check protocol compliance for experimental methods 2025-04-12 16:26:32 -07:00