llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
Xi Yan 6be563434e
[remove import *][2/n] remove rest of import * in implementations (#690)
# What does this PR do?

- see https://github.com/meta-llama/llama-stack/pull/689
![image](https://github.com/user-attachments/assets/76946a67-7373-43b5-8a03-0ad201aa543b)

- leaving `tools/builtin.py` untouched to avoid merge conflicts (the refactor pattern is sketched just below)
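
To make the change concrete, here is a minimal sketch of the pattern applied throughout these modules (the imported module and symbols below use the standard library purely for illustration; they are not taken from the actual diff):

```python
# Before: a wildcard import pulls every public name into the module's
# namespace, so readers and linters cannot tell where a symbol comes
# from, and names can be silently shadowed.
#
#     from os.path import *
#
# After: each dependency is named explicitly, documenting exactly what
# the module relies on and keeping the namespace predictable.
from os.path import exists, join

print(join("llama_stack", "providers"))  # behaves the same as before,
print(exists("."))                       # but every name has a visible origin
```

Explicit imports also let static analysis flag unused or undefined names, which is much harder to do reliably when `import *` is in play.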


## Test Plan

- see https://github.com/meta-llama/llama-stack/pull/689


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-27 15:32:04 -08:00
| File | Last commit | Date |
|------|-------------|------|
| `quantization/` | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |
| `__init__.py` | Add provider deprecation support; change directory structure (#397) | 2024-11-07 13:04:53 -08:00 |
| `config.py` | [remove import *][2/n] remove rest of import * in implementations (#690) | 2024-12-27 15:32:04 -08:00 |
| `generation.py` | [remove import *][2/n] remove rest of import * in implementations (#690) | 2024-12-27 15:32:04 -08:00 |
| `inference.py` | [4/n][torchtune integration] support lazy load model during inference (#620) | 2024-12-18 16:30:53 -08:00 |
| `model_parallel.py` | Fix Meta reference GPU implementation (#663) | 2024-12-19 14:09:45 -08:00 |
| `parallel_utils.py` | Update types in parallel_utils for meta-reference-gpu impl | 2024-12-19 13:58:41 -08:00 |