fix readmes

Xi Yan 2024-10-25 12:49:33 -07:00
parent 474101a9f7
commit 6100b02ff5
6 changed files with 10 additions and 14 deletions

@@ -49,7 +49,7 @@ inference:
 **Via Conda**
 ```bash
-llama stack build --config ./build.yaml
+llama stack build --template fireworks --image-type conda
 # -- modify run.yaml to a valid Fireworks server endpoint
 llama stack run ./run.yaml
 ```
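For context, a minimal sketch of the updated Fireworks flow after this change, assuming the generated run.yaml carries a `url` field for the provider (the endpoint value is illustrative, not taken from this commit):

```bash
# Build from the bundled fireworks template into a conda environment
llama stack build --template fireworks --image-type conda

# Edit run.yaml so the fireworks provider points at a valid endpoint, e.g.:
#   inference:
#     - provider_id: fireworks
#       config:
#         url: https://api.fireworks.ai/inference   # illustrative value
llama stack run ./run.yaml
```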

@@ -86,6 +86,6 @@ inference:
 **Via Conda**
 ```
-llama stack build --config ./build.yaml
+llama stack build --template ollama --image-type conda
 llama stack run ./gpu/run.yaml
 ```
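One plausible end-to-end sequence for the ollama template, assuming a local Ollama server on its default port (11434); the serve step is an assumption about the user's setup, not part of this commit:

```bash
# Terminal 1: start the Ollama server (listens on 11434 by default)
ollama serve

# Terminal 2: build the distribution and run it against that server
llama stack build --template ollama --image-type conda
llama stack run ./gpu/run.yaml
```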

@@ -88,7 +88,7 @@ inference:
 **Via Conda**
 ```bash
-llama stack build --config ./build.yaml
+llama stack build --template tgi --image-type conda
 # -- start a TGI server endpoint
 llama stack run ./gpu/run.yaml
 ```
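A hedged sketch of the "start a TGI server endpoint" step using the upstream text-generation-inference container; the model id and host port are illustrative assumptions:

```bash
# Serve a model with HuggingFace TGI (model and port are illustrative)
docker run --gpus all -p 8080:80 \
  -v $HOME/.cache/huggingface:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id meta-llama/Llama-2-7b-chat-hf

# Then build the tgi distribution and run against the endpoint above
llama stack build --template tgi --image-type conda
llama stack run ./gpu/run.yaml
```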

@@ -62,7 +62,7 @@ memory:
 **Via Conda**
 ```bash
-llama stack build --config ./build.yaml
+llama stack build --template together --image-type conda
 # -- modify run.yaml to a valid Together server endpoint
 llama stack run ./run.yaml
 ```
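The Together variant follows the same shape; a sketch assuming the endpoint URL and the `TOGETHER_API_KEY` environment variable name (both are assumptions for illustration, not confirmed by this commit):

```bash
export TOGETHER_API_KEY=...   # assumed variable name; use your own key
llama stack build --template together --image-type conda
# edit run.yaml so the together provider's url points at a valid endpoint,
# e.g. https://api.together.xyz (illustrative)
llama stack run ./run.yaml
```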

@@ -279,11 +279,11 @@ llama stack build --list-templates
 You may then pick a template to build your distribution with providers fitted to your liking.
 ```
-llama stack build --template local-tgi --name my-tgi-stack
+llama stack build --template local-tgi --name my-tgi-stack --image-type conda
 ```
 ```
-$ llama stack build --template local-tgi --name my-tgi-stack
+$ llama stack build --template local-tgi --name my-tgi-stack --image-type conda
 ...
 ...
 Build spec configuration saved at ~/.conda/envs/llamastack-my-tgi-stack/my-tgi-stack-build.yaml
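Putting the template workflow together: the listing command comes from this hunk's header and the configure step from the next hunk's header; the rest mirrors the + lines above:

```bash
# See which templates ship with the CLI
llama stack build --list-templates

# Build one into a conda environment, then configure it
llama stack build --template local-tgi --name my-tgi-stack --image-type conda
llama stack configure my-tgi-stack
```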
@@ -293,10 +293,10 @@ You may now run `llama stack configure my-tgi-stack` or `llama stack configure ~
 #### Building from config file
 - In addition to templates, you may customize the build to your liking through editing config files and build from config files with the following command.
-- The config file will be of contents like the ones in `llama_stack/distributions/templates/`.
+- The config file will be of contents like the ones in `llama_stack/templates/`.
 ```
-$ cat llama_stack/distribution/templates/local-ollama-build.yaml
+$ cat build.yaml
 name: local-ollama
 distribution_spec:
@@ -311,7 +311,7 @@ image_type: conda
 ```
 ```
-llama stack build --config llama_stack/distribution/templates/local-ollama-build.yaml
+llama stack build --config build.yaml
 ```
 #### How to build distribution with Docker image
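For the config-file path, a hedged sketch of what a minimal build.yaml might contain, extrapolated from the `name`, `distribution_spec`, and `image_type` fragments visible in the hunk; the description and provider choices are illustrative assumptions:

```bash
# Write a minimal build config, then build from it.
# Field values are extrapolated from the diff fragments; the provider
# selection below is an assumption, not the canonical template contents.
cat > build.yaml <<'EOF'
name: local-ollama
distribution_spec:
  description: local stack backed by a remote Ollama server
  providers:
    inference: remote::ollama
image_type: conda
EOF

llama stack build --config build.yaml
```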

@@ -35,11 +35,7 @@ You have two ways to start up Llama stack server:
 1. **Starting up server via docker**:
-We provide 2 pre-built Docker image of Llama Stack distribution, which can be found in the following links.
+We provide pre-built Docker image of Llama Stack distribution, which can be found in the following links in the [distributions](../distributions/) folder.
-- [llamastack-local-gpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-gpu/general)
-  - This is a packaged version with our local meta-reference implementations, where you will be running inference locally with downloaded Llama model checkpoints.
-- [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general)
-  - This is a lite version with remote inference where you can hook up to your favourite remote inference framework (e.g. ollama, fireworks, together, tgi) for running inference without GPU.
 > [!NOTE]
 > For GPU inference, you need to set these environment variables for specifying local directory containing your model checkpoints, and enable GPU inference to start running docker container.
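A hypothetical `docker run` invocation consistent with that note, assuming checkpoints under ~/.llama and the image name from the links this commit removes; the variable name, port, and image are all illustrative:

```bash
# Mount local checkpoints and enable GPU access; values are illustrative
export LLAMA_CHECKPOINT_DIR=~/.llama   # assumed variable name
docker run -it --gpus all \
  -p 5000:5000 \
  -v $LLAMA_CHECKPOINT_DIR:/root/.llama \
  llamastack/llamastack-local-gpu
```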