llama-stack-mirror/toolchain/inference/quantization/fp8_requirements.txt
2024-07-19 12:30:35 -07:00


# Dependencies for FP8 quantized inference.
# Install with: pip install -r fp8_requirements.txt

fairscale           # model/tensor parallelism utilities
fire                # CLI argument parsing
tiktoken            # BPE tokenizer
blobfile            # local/remote blob file I/O
fbgemm-gpu==0.8.0rc4  # GPU kernels, including FP8 quantization ops (pinned to a release candidate)