mirror of https://github.com/meta-llama/llama-stack.git

commit 9f76eab1e3 (parent c0c5839361): update getting started
1 changed file with 13 additions and 29 deletions

@@ -88,7 +88,6 @@ In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct`
- `name`: the name for our distribution (e.g. `8b-instruct`)
- `image_type`: our build image type (`conda | docker`)
- `distribution_spec`: our distribution specs for specifying API providers
  - `distribution_type`: a unique name to identify our distribution. The available distributions can be found in the [llama_toolchain/configs/distributions/distribution_registry](llama_toolchain/configs/distributions/distribution_registry/) folder in the form of YAML files. You can run `llama stack list-distributions` to see the available distributions.
  - `description`: a short description of the configurations for the distribution
  - `providers`: specifies the underlying implementation for serving each API endpoint
- `image_type`: `conda` | `docker` to specify whether to build the distribution as a Docker image or a Conda environment.
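Putting these fields together, a complete build config is a small YAML file. The sketch below is assembled from the default values shown in the interactive session later in this section; the nested form of `providers` and the exact values are illustrative assumptions, not a canonical config:

```
# Sketch of a <name>-build.yaml; values taken from the defaults shown below (illustrative)
name: 8b-instruct
distribution_spec:
  distribution_type: local
  description: Use code from `llama_toolchain` itself to serve all llama stack APIs
  docker_image: null
  providers:
    inference: meta-reference
    memory: meta-reference-faiss
    safety: meta-reference
    agentic_system: meta-reference
    telemetry: console
image_type: conda
```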

@@ -96,32 +95,13 @@ In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct`
#### Build a local distribution with conda
The following command and specifications allow you to get started with building.
```
llama stack build
llama stack build <path/to/config>
```

You will be prompted to enter config specifications.
```
$ llama stack build

Enter value for name (required): 8b-instruct

Entering sub-configuration for distribution_spec:
Enter value for distribution_type (default: local) (required):
Enter value for description (default: Use code from `llama_toolchain` itself to serve all llama stack APIs) (required):
Enter value for docker_image (optional):
Enter value for providers (default: {'inference': 'meta-reference', 'memory': 'meta-reference-faiss', 'safety': 'meta-reference', 'agentic_system': 'meta-reference', 'telemetry': 'console'}) (required):
Enter value for image_type (default: conda) (required):

Conda environment 'llamastack-8b-instruct' exists. Checking Python version...

Build spec configuration saved at ~/.llama/distributions/conda/8b-instruct-build.yaml
```

After this step is complete, a file named `8b-instruct-build.yaml` will be generated and saved at `~/.llama/distributions/conda/8b-instruct-build.yaml`.
- You will be required to pass in a file path to the build config file (e.g. `./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml`). We provide some example build config files for configuring different types of distributions in the `./llama_toolchain/configs/distributions/` folder.
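
For example (a sketch using the `--config` flag demonstrated later in this section; adjust the path to where your checkout lives), you could build directly from one of the provided example configs:

```
llama stack build --config ./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml
```
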
The file will have the following contents:
```
$ cat ~/.llama/distributions/conda/8b-instruct-build.yaml
$ cat ./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml

name: 8b-instruct
distribution_spec:
```
@@ -137,11 +117,17 @@ distribution_spec:
```
image_type: conda
```

You may edit the `8b-instruct-build.yaml` file and re-run the `llama stack build` command to rebuild and update the distribution.
You may run the `llama stack build` command with `--name` to override the name of your distribution.
```
llama stack build --config ~/.llama/distributions/conda/8b-instruct-build.yaml
$ llama stack build ~/.llama/distributions/conda/8b-instruct-build.yaml --name 8b-instruct
...
Build spec configuration saved at ~/.llama/distributions/conda/8b-instruct-build.yaml
```

After this step is complete, a file named `8b-instruct-build.yaml` will be generated and saved at `~/.llama/distributions/conda/8b-instruct-build.yaml`.
#### How to build a distribution with different API providers using configs
To specify a different API provider, we can change the `distribution_spec` in our `<name>-build.yaml` config. For example, the following build spec allows you to build a distribution using TGI as the inference API provider.
@@ -150,7 +136,6 @@ $ cat ./llama_toolchain/configs/distributions/conda/local-tgi-conda-example-build.yaml
```
name: local-tgi-conda-example
distribution_spec:
  distribution_type: local-plus-tgi-inference
  description: Use TGI (local or with Hugging Face Inference Endpoints) for running LLM inference. When using HF Inference Endpoints, you must provide the name of the endpoint.
  docker_image: null
  providers:
```
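
With a build config like this in place, the build step follows the same pattern as earlier (a sketch; reusing the config's own `name` value for `--name` is an illustrative choice):

```
llama stack build --config ./llama_toolchain/configs/distributions/conda/local-tgi-conda-example-build.yaml --name local-tgi-conda-example
```
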
@@ -177,7 +162,6 @@ $ cat ./llama_toolchain/configs/distributions/docker/local-docker-example-build.yaml
```
name: local-docker-example
distribution_spec:
  distribution_type: local
  description: Use code from `llama_toolchain` itself to serve all llama stack APIs
  docker_image: null
  providers:
```
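
As with the conda examples above, the Docker-based distribution is built with the same command shape (a sketch; the flags are the `--config` and `--name` options shown earlier):

```
llama stack build --config ./llama_toolchain/configs/distributions/docker/local-docker-example-build.yaml --name local-docker-example
```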