llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference/quantization
Last commit 725423c95c by Ashwin Bharambe:

refactor: move llama3 impl to meta_reference provider (#1364)

Just moving bits to a better place.

## Test Plan

```bash
torchrun $CONDA_PREFIX/bin/pytest -s -v test_text_inference.py
```
Committed 2025-03-03 13:22:57 -08:00
| Name | Last commit | Last commit date |
| --- | --- | --- |
| `scripts` | refactor: move llama3 impl to meta_reference provider (#1364) | 2025-03-03 13:22:57 -08:00 |
| `__init__.py` | Add provider deprecation support; change directory structure (#397) | 2024-11-07 13:04:53 -08:00 |
| `fp8_impls.py` | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| `fp8_txest_disabled.py` | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| `hadamard_utils.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `loader.py` | refactor: move llama3 impl to meta_reference provider (#1364) | 2025-03-03 13:22:57 -08:00 |