mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-28 15:02:37 +00:00

commit 7c9ed3e58e (parent d73fed5cc3): update README a bit
1 changed file with 14 additions and 1 deletion
README.md | 15
@@ -8,12 +8,25 @@ The tool chain apis that are covered --
 - synthetic data generation
+
+## Running FP8
+
+You need the `fbgemm-gpu` package, which requires torch >= 2.4.0 (currently only in nightly, but releasing shortly...).
+
+```bash
+ENV=fp8_env
+conda create -n $ENV python=3.10
+conda activate $ENV
+
+pip3 install -r fp8_requirements.txt
+```
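The torch >= 2.4.0 constraint above is a simple dotted-version comparison; a minimal sketch of how such a check can be done before installing (the `meets_requirement` helper is ours, not from the repo, and it handles plain dotted-integer versions only, not nightly suffixes like `.dev...`):

```python
# Compare dotted-integer version strings, e.g. to confirm an installed
# torch satisfies the ">= 2.4.0" requirement named in the README.
def meets_requirement(installed: str, required: str = "2.4.0") -> bool:
    # Tuples of ints compare element-wise, so "2.10.0" correctly beats "2.4.0".
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

print(meets_requirement("2.4.0"))   # True
print(meets_requirement("2.3.1"))   # False
print(meets_requirement("2.10.0"))  # True
```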
+
 ### Generate OpenAPI specs
 
 Set up virtual environment
 
 ```
-python3.9 -m venv ~/.venv/toolchain/
+python3 -m venv ~/.venv/toolchain/
 source ~/.venv/toolchain/bin/activate
 
 with-proxy pip3 install -r requirements.txt
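The `python3 -m venv ~/.venv/toolchain/` step in the hunk above can also be done programmatically with the stdlib `venv` module; a sketch under the assumption that any writable directory works as the target (the temp-dir path here is illustrative, not the `~/.venv/toolchain/` path from the README):

```python
# Programmatic equivalent of `python3 -m venv <target>` using the stdlib
# venv module. with_pip=False skips pip bootstrapping to keep it fast.
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "toolchain")
venv.EnvBuilder(with_pip=False).create(target)

# A fresh venv has bin/ (POSIX) or Scripts/ (Windows) plus pyvenv.cfg.
created = (os.path.isdir(os.path.join(target, "bin"))
           or os.path.isdir(os.path.join(target, "Scripts")))
print(created)  # True
```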