llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
Ashwin Bharambe 540fc4d717
Fix Meta reference GPU implementation (#663)
In-place mutations corrupted shared state and broke generation; never do that (see the sketch below).
2024-12-19 14:09:45 -08:00
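
The commit message above warns against mutating shared request state in place. A minimal sketch of the failure mode and the copy-based fix, using hypothetical names (`GenerationRequest`, `prepare_for_model_parallel_*`); the actual types in this provider differ:

```python
from dataclasses import dataclass, replace


@dataclass
class GenerationRequest:
    # Hypothetical request object; the real meta_reference types differ.
    prompt_tokens: list[int]
    max_new_tokens: int


def prepare_for_model_parallel_bad(req: GenerationRequest) -> GenerationRequest:
    # BAD: mutates the caller's object in place. If the same request is
    # retried or shared across worker processes, the padding piles up.
    req.prompt_tokens += [0, 0]  # e.g., padding tokens
    return req


def prepare_for_model_parallel_good(req: GenerationRequest) -> GenerationRequest:
    # GOOD: return a fresh object; the caller's request stays untouched.
    return replace(req, prompt_tokens=req.prompt_tokens + [0, 0])


req = GenerationRequest(prompt_tokens=[1, 2, 3], max_new_tokens=16)
prepare_for_model_parallel_bad(req)
prepare_for_model_parallel_bad(req)
print(req.prompt_tokens)  # [1, 2, 3, 0, 0, 0, 0] -- corrupted by repeated mutation

req2 = GenerationRequest(prompt_tokens=[1, 2, 3], max_new_tokens=16)
prepare_for_model_parallel_good(req2)
prepare_for_model_parallel_good(req2)
print(req2.prompt_tokens)  # [1, 2, 3] -- original preserved
```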
quantization use logging instead of prints (#499) 2024-11-21 11:32:53 -08:00
__init__.py Add provider deprecation support; change directory structure (#397) 2024-11-07 13:04:53 -08:00
config.py [4/n][torchtune integration] support lazy load model during inference (#620) 2024-12-18 16:30:53 -08:00
generation.py [4/n][torchtune integration] support lazy load model during inference (#620) 2024-12-18 16:30:53 -08:00
inference.py [4/n][torchtune integration] support lazy load model during inference (#620) 2024-12-18 16:30:53 -08:00
model_parallel.py Fix Meta reference GPU implementation (#663) 2024-12-19 14:09:45 -08:00
parallel_utils.py Update types in parallel_utils for meta-reference-gpu impl 2024-12-19 13:58:41 -08:00
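
config.py, generation.py, and inference.py were last touched by #620, which made the provider load model weights lazily at first use rather than at startup. A hedged sketch of that pattern with a hypothetical `LazyModelHolder`; the provider's actual wiring differs:

```python
from typing import Optional


class LazyModelHolder:
    """Defers expensive model loading until the first inference call.

    Hypothetical illustration of the lazy-load pattern from #620; not
    the provider's actual class.
    """

    def __init__(self, checkpoint_dir: str):
        self.checkpoint_dir = checkpoint_dir
        self._model: Optional[object] = None  # nothing loaded at construction

    def _load(self) -> object:
        # Placeholder for the real weight-loading step (e.g., reading
        # checkpoints and moving tensors to GPU), which can take minutes.
        print(f"loading weights from {self.checkpoint_dir} ...")
        return object()

    @property
    def model(self) -> object:
        if self._model is None:  # first access triggers the load
            self._model = self._load()
        return self._model


holder = LazyModelHolder("/checkpoints/llama")
# Provider startup is cheap: no weights have been touched yet.
_ = holder.model  # the first inference call pays the loading cost
_ = holder.model  # subsequent calls reuse the cached model
```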