llama-stack-mirror/toolchain/inference/quantization

Latest commit: ad62e2e1f3 by Ashwin Bharambe (2024-07-20 22:54:48 -07:00)
make inference server load checkpoints for fp8 inference
- introduce quantization-related args for the inference config
- also kill GeneratorArgs
| File | Last commit message | Date |
| --- | --- | --- |
| build_conda.sh | Add toolchain from agentic system here | 2024-07-19 12:30:35 -07:00 |
| fp8_impls.py | make inference server load checkpoints for fp8 inference | 2024-07-20 22:54:48 -07:00 |
| fp8_requirements.txt | Add toolchain from agentic system here | 2024-07-19 12:30:35 -07:00 |
| generation.py | Add toolchain from agentic system here | 2024-07-19 12:30:35 -07:00 |
| loader.py | make inference server load checkpoints for fp8 inference | 2024-07-20 22:54:48 -07:00 |
| model.py | make inference server load checkpoints for fp8 inference | 2024-07-20 22:54:48 -07:00 |
| quantize_checkpoint.py | Add toolchain from agentic system here | 2024-07-19 12:30:35 -07:00 |
| run_quantize_checkpoint.sh | Add toolchain from agentic system here | 2024-07-19 12:30:35 -07:00 |
| test_fp8.py | make inference server load checkpoints for fp8 inference | 2024-07-20 22:54:48 -07:00 |
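Checkpoint quantization to fp8 (as `quantize_checkpoint.py` performs) typically rests on per-tensor scaling into the representable range of the e4m3 format. The sketch below illustrates that general idea only; it is not taken from this repository, and every name in it (`fp8_scale`, `quantize`, `dequantize`, `FP8_E4M3_MAX`) is illustrative. Rounding to actual e4m3 bit patterns is omitted for brevity.

```python
# Minimal sketch of per-tensor fp8 (e4m3) scaling -- an assumption about the
# approach, not code from quantize_checkpoint.py.
FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in e4m3

def fp8_scale(values):
    """Per-tensor scale mapping the largest magnitude onto the e4m3 max."""
    amax = max((abs(v) for v in values), default=0.0) or 1.0
    return amax / FP8_E4M3_MAX

def quantize(values):
    """Scale and clamp values into the fp8-representable range.

    Returns the scaled values plus the scale needed to recover them.
    """
    s = fp8_scale(values)
    q = [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / s)) for v in values]
    return q, s

def dequantize(q, s):
    """Recover (approximately) the original values from scaled ones."""
    return [v * s for v in q]
```

A checkpoint quantizer built this way would store the scaled weights alongside their per-tensor scales, and the inference-side loader (here, `loader.py` / `fp8_impls.py`) would consume both at load time.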