llama-stack-mirror/llama_stack/models/llama/llama4
Latest commit 2f91b73bcb by Sébastien Han
feat(pre-commit): enhance pre-commit hooks with additional checks
Add several new pre-commit hooks to improve code quality and security:

- no-commit-to-branch: prevent direct commits to protected branches like
  `main`
- check-yaml: validate YAML files
- detect-private-key: prevent accidental commit of private keys
- requirements-txt-fixer: maintain consistent requirements.txt format
  and sorting
- mixed-line-ending: enforce LF line endings to avoid mixed line endings
- check-executables-have-shebangs: ensure executable scripts have shebangs
- check-json: validate JSON files
- check-shebang-scripts-are-executable: verify shebang scripts are executable
- check-symlinks: validate symlinks and report broken ones
- check-toml: validate TOML files, mainly pyproject.toml

The respective fixes have been included (see the configuration sketch below).

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-04-24 15:05:32 +02:00
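For context, every hook named in the commit message ships with the upstream pre-commit-hooks project, so enabling them amounts to adding entries to the repository's `.pre-commit-config.yaml`. The following is a minimal, hypothetical sketch of what those entries might look like; the `rev` pin and the arguments shown are assumptions, not necessarily what this commit actually uses.

```yaml
# Hypothetical sketch only: illustrates .pre-commit-config.yaml entries for the
# hooks named in the commit message; the actual rev, args, and excludes may differ.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0  # assumed pin, not necessarily the revision used in this repo
    hooks:
      - id: no-commit-to-branch            # block direct commits to protected branches
        args: ["--branch", "main"]         # shown explicitly; master/main are protected by default
      - id: check-yaml                     # validate YAML syntax
      - id: detect-private-key             # reject files containing private keys
      - id: requirements-txt-fixer         # sort and normalize requirements.txt
      - id: mixed-line-ending              # enforce a single line-ending style
        args: ["--fix=lf"]                 # rewrite CRLF/CR to LF
      - id: check-executables-have-shebangs
      - id: check-json                     # validate JSON syntax
      - id: check-shebang-scripts-are-executable
      - id: check-symlinks                 # report broken symlinks
      - id: check-toml                     # validate TOML, e.g. pyproject.toml
```

Running `pre-commit run --all-files` once after adding such hooks would surface the "respective fixes" mentioned above (re-sorted requirements files, normalized line endings, and so on).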
| Name | Last commit | Last commit date |
| --- | --- | --- |
| quantization | fix: on-the-fly int4 quantize parameter (#1920) | 2025-04-09 15:00:12 -07:00 |
| vision | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| __init__.py | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| args.py | fix: Mirror llama4 rope scaling fixes, small model simplify (#1917) | 2025-04-09 11:28:45 -07:00 |
| chat_format.py | fix: OAI compat endpoint for meta reference inference provider (#1962) | 2025-04-17 11:16:04 -07:00 |
| datatypes.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| ffn.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| generation.py | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |
| model.py | fix: Mirror llama4 rope scaling fixes, small model simplify (#1917) | 2025-04-09 11:28:45 -07:00 |
| moe.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| preprocess.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| prompt_format.md | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| prompts.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| tokenizer.model | feat(pre-commit): enhance pre-commit hooks with additional checks | 2025-04-24 15:05:32 +02:00 |
| tokenizer.py | fix: unhide python_start, python_end | 2025-04-11 20:30:44 -07:00 |