refactor: move llama3 impl to meta_reference provider (#1364)

Just moving bits to a better place

## Test Plan

```bash
torchrun $CONDA_PREFIX/bin/pytest -s -v test_text_inference.py
```
Ashwin Bharambe 2025-03-03 13:22:57 -08:00 committed by GitHub
parent af396e3809
commit 725423c95c
10 changed files with 7 additions and 9 deletions


```diff
@@ -24,9 +24,9 @@ from fairscale.nn.model_parallel.initialize import (
 )
 from torch.nn.parameter import Parameter
-from llama_stack.models.llama.llama3.args import ModelArgs
-from llama_stack.models.llama.llama3.model import Transformer, TransformerBlock
 from llama_stack.models.llama.llama3.tokenizer import Tokenizer
+from llama_stack.providers.inline.inference.meta_reference.llama3.args import ModelArgs
+from llama_stack.providers.inline.inference.meta_reference.llama3.model import Transformer, TransformerBlock
 from llama_stack.providers.inline.inference.meta_reference.quantization.fp8_impls import (
     quantize_fp8,
 )
```
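This commit updates call sites directly to the new module path. As an illustrative sketch only (not part of this PR), a move like this can alternatively be made non-breaking for external callers by leaving a thin re-export shim at the old path. The module names below are stand-ins simulated with in-memory modules, not the real `llama_stack` package:

```python
# Illustrative re-export shim pattern (hypothetical; not part of this commit).
# Simulates moving a class from an "old" module path to a "new" one while
# keeping the old import path working, using in-memory stand-in modules.
import sys
import types
from importlib import import_module

# Stand-in for the new location of the implementation.
new_mod = types.ModuleType("meta_reference_llama3_model")

class Transformer:  # placeholder for the real model class
    pass

new_mod.Transformer = Transformer
sys.modules["meta_reference_llama3_model"] = new_mod

# Shim registered at the old location: it just re-exports the moved name.
old_mod = types.ModuleType("models_llama_llama3_model")
old_mod.Transformer = new_mod.Transformer
sys.modules["models_llama_llama3_model"] = old_mod

# Imports against the old path still resolve to the same class object.
assert import_module("models_llama_llama3_model").Transformer is Transformer
```

The trade-off is churn versus compatibility: updating call sites in one commit (as done here) keeps a single canonical import path, while a shim spreads the migration over time at the cost of two live paths to the same code.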