llama-stack/llama_stack/providers/inline/inference/meta_reference/quantization
Sébastien Han 6fa257b475
chore(lint): update Ruff ignores for project conventions and maintainability (#1184)
- Added new ignores from flake8-bugbear (`B007`, `B008`)
- Ignored `C901` (high function complexity) for now, pending review
- Maintained PyTorch conventions (`N812`, `N817`)
- Allowed `E731` (lambda assignments) for flexibility
- Consolidated existing ignores (`E402`, `E501`, `F405`, `C408`, `N812`)
- Documented rationale for each ignored rule

This keeps our linting aligned with project needs while tracking
potential fixes.
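
The ignore set described above could be sketched in `pyproject.toml` roughly as follows (a hypothetical fragment based on the rule codes listed in this commit message, not the repository's actual config; comments paraphrase the stated rationale):

```toml
[tool.ruff.lint]
ignore = [
    "B007",  # flake8-bugbear: unused loop control variable
    "B008",  # flake8-bugbear: function call in default argument
    "C901",  # high function complexity -- ignored for now, pending review
    "N812",  # lowercase imported as non-lowercase (PyTorch convention)
    "N817",  # CamelCase imported as acronym (PyTorch convention)
    "E731",  # lambda assignment -- allowed for flexibility
    "E402",  # module-level import not at top of file
    "E501",  # line too long
    "F405",  # name may be undefined from star imports
    "C408",  # unnecessary dict()/list()/tuple() call
]
```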

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-02-28 09:36:49 -08:00
scripts build: format codebase imports using ruff linter (#1028) 2025-02-13 10:06:21 -08:00
__init__.py Add provider deprecation support; change directory structure (#397) 2024-11-07 13:04:53 -08:00
fp8_impls.py build: format codebase imports using ruff linter (#1028) 2025-02-13 10:06:21 -08:00
fp8_txest_disabled.py chore(lint): update Ruff ignores for project conventions and maintainability (#1184) 2025-02-28 09:36:49 -08:00
hadamard_utils.py Fix precommit check after moving to ruff (#927) 2025-02-02 06:46:45 -08:00
loader.py fix: resolve type hint issues and import dependencies (#1176) 2025-02-25 11:06:47 -08:00