llama-stack-mirror/llama_stack/providers/impls/meta_reference/inference
Last commit: cd39509e56 "pre-commit" by Ashwin Bharambe, 2024-10-25 12:58:17 -07:00
quantization/        Use enum.value to check against str                                           2024-10-25 12:53:32 -07:00
__init__.py          Split off meta-reference-quantized provider                                   2024-10-10 16:03:19 -07:00
config.py            Allow overridding checkpoint_dir via config                                   2024-10-18 14:28:06 -07:00
generation.py        pre-commit                                                                    2024-10-25 12:58:17 -07:00
inference.py         Add support for Structured Output / Guided decoding (#281)                    2024-10-22 12:53:34 -07:00
model_parallel.py    Make all methods async def again; add completion() for meta-reference (#270)  2024-10-18 20:50:59 -07:00
parallel_utils.py    Make all methods async def again; add completion() for meta-reference (#270)  2024-10-18 20:50:59 -07:00