llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
__init__.py         fold in meta-reference-quantized          2025-04-07 11:30:12 -07:00
common.py           refactor: move generation.py to llama3    2025-03-03 13:46:50 -08:00
config.py           fold in meta-reference-quantized          2025-04-07 11:30:12 -07:00
generators.py       fold in meta-reference-quantized          2025-04-07 11:30:12 -07:00
inference.py        several fixes                             2025-04-07 10:31:20 -07:00
model_parallel.py   feat: introduce llama4 support (#1877)    2025-04-05 11:53:35 -07:00
parallel_utils.py   fix: avoid tensor memory error (#1688)    2025-03-18 16:17:29 -07:00