llama-stack-mirror/llama_stack/models/llama
Sébastien Han dc94433072
feat(pre-commit): enhance pre-commit hooks with additional checks (#2014)
# What does this PR do?

Add several new pre-commit hooks to improve code quality and security:

- no-commit-to-branch: prevent direct commits to protected branches like
`main`
- check-yaml: validate YAML files
- detect-private-key: prevent accidental commit of private keys
- requirements-txt-fixer: maintain consistent requirements.txt format
and sorting
- mixed-line-ending: normalize line endings to LF so mixed line endings do not slip in
- check-executables-have-shebangs: ensure executable scripts have
shebangs
- check-json: validate JSON files
- check-shebang-scripts-are-executable: verify shebang scripts are
executable
- check-symlinks: validate symlinks and report broken ones
- check-toml: validate TOML files, primarily pyproject.toml

The fixes needed to make the codebase pass these new hooks are included in this PR; a sketch of the corresponding configuration is shown below.
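
For reference, here is a minimal sketch of how hooks like these can be declared in `.pre-commit-config.yaml`, assuming the standard `pre-commit/pre-commit-hooks` repository. The `rev` pin and the explicit `args` are illustrative assumptions, not necessarily the exact values used in this PR:

```yaml
# Illustrative excerpt of .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0  # assumed pin; use whatever version the project tracks
    hooks:
      - id: no-commit-to-branch          # block direct commits to protected branches
        args: ['--branch', 'main']       # example: protect main
      - id: check-yaml
      - id: detect-private-key
      - id: requirements-txt-fixer
      - id: mixed-line-ending
        args: ['--fix=lf']               # normalize everything to LF
      - id: check-executables-have-shebangs
      - id: check-json
      - id: check-shebang-scripts-are-executable
      - id: check-symlinks
      - id: check-toml
```

Running `pre-commit run --all-files` after adding the hooks surfaces the files that need the one-time fixes bundled here.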

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-04-30 11:35:49 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| llama3 | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| llama3_1 | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| llama3_2 | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| llama3_3 | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| llama4 | feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) | 2025-04-30 11:35:49 -07:00 |
| resources | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| `__init__.py` | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| `checkpoint.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `datatypes.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `hadamard_utils.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `prompt_format.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `quantize_impls.py` | fix: on-the-fly int4 quantize parameter (#1920) | 2025-04-09 15:00:12 -07:00 |
| `sku_list.py` | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| `sku_types.py` | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |