llama-stack-mirror/llama_stack/providers/impls/meta_reference/inference
Last commit: 2024-10-18 12:46:31 -07:00
Name                Last commit message                                Last commit date
quantization/       Fix fp8 implementation which had bit-rotten a bit  2024-10-15 13:57:01 -07:00
__init__.py         Split off meta-reference-quantized provider        2024-10-10 16:03:19 -07:00
config.py           Rename config var                                  2024-10-18 12:46:31 -07:00
generation.py       Fix fp8 implementation which had bit-rotten a bit  2024-10-15 13:57:01 -07:00
inference.py        Rename config var                                  2024-10-18 12:46:31 -07:00
model_parallel.py   Split off meta-reference-quantized provider        2024-10-10 16:03:19 -07:00
parallel_utils.py   Split off meta-reference-quantized provider        2024-10-10 16:03:19 -07:00