forked from phoenix-oss/llama-stack-mirror
[docs] update documentations (#356)
* move docs -> source * Add files via upload * mv image * Add files via upload * colocate iOS setup doc * delete image * Add files via upload * fix * delete image * Add files via upload * Update developer_cookbook.md * toctree * wip subfolder * docs update * subfolder * updates * name * updates * index * updates * refactor structure * depth * docs * content * docs * getting started * distributions * fireworks * fireworks * update * theme * theme * theme * pdj theme * pytorch theme * css * theme * agents example * format * index * headers * copy button * test tabs * test tabs * fix * tabs * tab * tabs * sphinx_design * quick start commands * size * width * css * css * download models * asthetic fix * tab format * update * css * width * css * docs * tab based * tab * tabs * docs * style * image * css * color * typo * update docs * missing links * list templates * links * links update * troubleshooting * fix * distributions * docs * fix table * kill llamastack-local-gpu/cpu * Update index.md * Update index.md * mv ios_setup.md * Update ios_setup.md * Add remote_or_local.gif * Update ios_setup.md * release notes * typos * Add ios_setup to index * nav bar * hide torctree * ios image * links update * rename * rename * docs * rename * links * distributions * distributions * distributions * distributions * remove release * remote --------- Co-authored-by: dltn <6599399+dltn@users.noreply.github.com> Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
This commit is contained in: parent ac93dd89cf, commit c810a4184d
37 changed files with 1777 additions and 2154 deletions
README.md
|
@ -6,6 +6,8 @@
|
|||
[](https://pypi.org/project/llama-stack/)
|
||||
[](https://discord.gg/llama-stack)
|
||||
|
||||
[**Get Started**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html)
|
||||
|
||||
This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.
|
||||
|
||||
The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to building and running AI agents in production. Beyond definition, we are building providers for the Llama Stack APIs. We are developing open-source versions and partnering with providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.
|
||||
|
@ -44,8 +46,6 @@ A Distribution is where APIs and Providers are assembled together to provide a c
|
|||
|
||||
## Supported Llama Stack Implementations
|
||||
### API Providers
|
||||
|
||||
|
||||
| **API Provider Builder** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
|
||||
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
|
||||
| Meta Reference | Single Node | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
|
||||
|
@ -59,13 +59,15 @@ A Distribution is where APIs and Providers are assembled together to provide a c
|
|||
| PyTorch ExecuTorch | On-device iOS | :heavy_check_mark: | :heavy_check_mark: | | |
|
||||
|
||||
### Distributions
|
||||
| **Distribution Provider** | **Docker** | **Inference** | **Memory** | **Safety** | **Telemetry** |
|
||||
| :----: | :----: | :----: | :----: | :----: | :----: |
|
||||
| Meta Reference | [Local GPU](https://hub.docker.com/repository/docker/llamastack/llamastack-local-gpu/general), [Local CPU](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
|
||||
| Dell-TGI | [Local TGI + Chroma](https://hub.docker.com/repository/docker/llamastack/llamastack-local-tgi-chroma/general) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
|
||||
|
||||
|
||||
|
||||
| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
|
||||
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
## Installation
|
||||
|
||||
You have two ways to install this repository:
|
||||
|
@ -92,21 +94,15 @@ You have two ways to install this repository:
|
|||
|
||||
## Documentation
|
||||
|
||||
The `llama` CLI makes it easy to work with the Llama Stack set of tools. Please find the following docs for details.
|
||||
Please check out our [Documentation](https://llama-stack.readthedocs.io/en/latest/index.html) page for more details.
|
||||
|
||||
* [CLI reference](docs/cli_reference.md)
|
||||
* [CLI reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
|
||||
* Guide to using the `llama` CLI to work with Llama models (download, study prompts) and to build/start a Llama Stack distribution.
|
||||
* [Getting Started](docs/getting_started.md)
|
||||
* [Getting Started](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html)
|
||||
* Quick guide to start a Llama Stack server.
|
||||
* [Jupyter notebook](./docs/getting_started.ipynb) walking through how to use simple text and vision inference with the llama_stack_client APIs
|
||||
* [Building a Llama Stack Distribution](docs/building_distro.md)
|
||||
* Guide to build a Llama Stack distribution
|
||||
* [Distributions](./distributions/)
|
||||
* References to start Llama Stack distributions backed with different API providers.
|
||||
* [Developer Cookbook](./docs/developer_cookbook.md)
|
||||
* References to guides to help you get started based on your developer needs.
|
||||
* [Contributing](CONTRIBUTING.md)
|
||||
* [Adding a new API Provider](./docs/new_api_provider.md) to walk-through how to add a new API provider.
|
||||
* [Adding a new API Provider](https://llama-stack.readthedocs.io/en/latest/api_providers/new_api_provider.html) walking through how to add a new API provider.
|
||||
|
||||
## Llama Stack Client SDK
|
||||
|
||||
|
|
|
@ -1,14 +0,0 @@
|
|||
# Llama Stack Distribution
|
||||
|
||||
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally, but can choose a cloud provider for a large model. Regardless, the higher level APIs your app needs to work with don't need to change at all. You can even imagine moving across the server / mobile-device boundary as well, always using the same uniform set of APIs for developing Generative AI applications.
|
||||
|
||||
|
||||
## Quick Start Llama Stack Distributions Guide
|
||||
| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
|
||||
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](./meta-reference-gpu/) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](./meta-reference-quantized-gpu/) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](./ollama/) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | remote::ollama | meta-reference |
|
||||
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](./tgi/) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](./together/) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](./fireworks/) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
|
@ -1,102 +0,0 @@
|
|||
# Meta Reference Distribution
|
||||
|
||||
The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations.
|
||||
|
||||
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
|
||||
| **Provider(s)** | meta-reference | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
|
||||
|
||||
|
||||
### Start the Distribution (Single Node GPU)
|
||||
|
||||
```
|
||||
$ cd distributions/meta-reference-gpu
|
||||
$ ls
|
||||
build.yaml compose.yaml README.md run.yaml
|
||||
$ docker compose up
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have access to a GPU so that the local server can run inference on it.
|
||||
|
||||
|
||||
> [!NOTE]
|
||||
> `~/.llama` should be the path containing downloaded weights of Llama models.
|
||||
|
||||
|
||||
This will download and start running a pre-built docker container. Alternatively, you may use the following commands:
|
||||
|
||||
```
|
||||
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
|
||||
```
|
||||
|
||||
### Alternative (Build and start distribution locally via conda)
|
||||
- You may check out the [Getting Started](../../docs/getting_started.md) guide for more details on building locally via conda and starting up a meta-reference distribution.
|
||||
|
||||
### Start Distribution With pgvector/chromadb Memory Provider
|
||||
##### pgvector
|
||||
1. Start running the pgvector server:
|
||||
|
||||
```
|
||||
docker run --network host --name mypostgres -it -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres pgvector/pgvector:pg16
|
||||
```
|
||||
|
||||
2. Edit the `run.yaml` file to point to the pgvector server.
|
||||
```
|
||||
memory:
|
||||
- provider_id: pgvector
|
||||
provider_type: remote::pgvector
|
||||
config:
|
||||
host: 127.0.0.1
|
||||
port: 5432
|
||||
db: postgres
|
||||
user: postgres
|
||||
password: mysecretpassword
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> If you get `RuntimeError: Vector extension is not installed.`, you will need to run `CREATE EXTENSION IF NOT EXISTS vector;` to enable the vector extension. E.g.
|
||||
|
||||
```
|
||||
docker exec -it mypostgres ./bin/psql -U postgres
|
||||
postgres=# CREATE EXTENSION IF NOT EXISTS vector;
|
||||
postgres=# SELECT extname from pg_extension;
|
||||
extname
|
||||
```
|
||||
|
||||
3. Run `docker compose up` with the updated `run.yaml` file.
|
||||
|
||||
##### chromadb
|
||||
1. Start running chromadb server
|
||||
```
|
||||
docker run -it --network host --name chromadb -p 6000:6000 -v ./chroma_vdb:/chroma/chroma -e IS_PERSISTENT=TRUE chromadb/chroma:latest
|
||||
```
|
||||
|
||||
2. Edit the `run.yaml` file to point to the chromadb server.
|
||||
```
|
||||
memory:
|
||||
- provider_id: remote::chromadb
|
||||
provider_type: remote::chromadb
|
||||
config:
|
||||
host: localhost
|
||||
port: 6000
|
||||
```
|
||||
|
||||
3. Run `docker compose up` with the updated `run.yaml` file.
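Before running `docker compose up` again, you can optionally check that the chromadb container is reachable. A minimal sketch, assuming the port mapping shown above and Chroma's standard heartbeat endpoint:

```bash
# Should return a small JSON payload with a nanosecond heartbeat if the server is up
curl http://localhost:6000/api/v1/heartbeat
```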
|
||||
|
||||
### Serving a new model
|
||||
You may change the `config.model` in `run.yaml` to update the model currently being served by the distribution. Make sure you have the model checkpoint downloaded in your `~/.llama`.
|
||||
```
|
||||
inference:
|
||||
- provider_id: meta0
|
||||
provider_type: meta-reference
|
||||
config:
|
||||
model: Llama3.2-11B-Vision-Instruct
|
||||
quantization: null
|
||||
torch_seed: null
|
||||
max_seq_len: 4096
|
||||
max_batch_size: 1
|
||||
```
|
||||
|
||||
Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
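For example, to fetch the checkpoint referenced in the `run.yaml` above, a sketch (here `META_URL` stands for the signed download URL you obtain from llama.meta.com, as described in the CLI reference):

```bash
# Confirm the descriptor exists, then download it into ~/.llama
llama model list
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
```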
|
|
@ -17,7 +17,7 @@ services:
|
|||
depends_on:
|
||||
text-generation-inference:
|
||||
condition: service_healthy
|
||||
image: llamastack/llamastack-local-cpu
|
||||
image: llamastack/llamastack-tgi
|
||||
network_mode: "host"
|
||||
volumes:
|
||||
- ~/.llama:/root/.llama
|
||||
|
|
|
@ -11,7 +11,7 @@ The `llamastack/distribution-together` distribution consists of the following pr
|
|||
| **Provider(s)** | remote::together | meta-reference | meta-reference, remote::weaviate | meta-reference | meta-reference |
|
||||
|
||||
|
||||
### Start the Distribution (Single Node CPU)
|
||||
### Docker: Start the Distribution (Single Node CPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have a hosted endpoint at Together with an API key.
|
||||
|
@ -33,23 +33,7 @@ inference:
|
|||
api_key: <optional api key>
|
||||
```
|
||||
|
||||
### (Alternative) llama stack run (Single Node CPU)
|
||||
|
||||
```
|
||||
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-together --yaml_config /root/my-run.yaml
|
||||
```
|
||||
|
||||
Make sure the inference provider in your `run.yaml` file points to the correct Together server URL endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: together
|
||||
provider_type: remote::together
|
||||
config:
|
||||
url: https://api.together.xyz/v1
|
||||
api_key: <optional api key>
|
||||
```
|
||||
|
||||
**Via Conda**
|
||||
### Conda llama stack run (Single Node CPU)
|
||||
|
||||
```bash
|
||||
llama stack build --template together --image-type conda
|
||||
|
@ -57,7 +41,7 @@ llama stack build --template together --image-type conda
|
|||
llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
### Model Serving
|
||||
### (Optional) Update Model Serving Configuration
|
||||
|
||||
Use `llama-stack-client models list` to check the available models served by Together.
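For example, once the distribution is up:

```bash
# Query the running distribution for the models it currently serves
llama-stack-client models list
```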
|
||||
|
||||
|
|
docs/_static/css/my_theme.css
|
@ -0,0 +1,9 @@
|
|||
@import url("theme.css");
|
||||
|
||||
.wy-nav-content {
|
||||
max-width: 90%;
|
||||
}
|
||||
|
||||
.wy-side-nav-search, .wy-nav-top {
|
||||
background: #666666;
|
||||
}
|
docs/_static/llama-stack.png (binary file not shown)
Before Width: | Height: | Size: 71 KiB After Width: | Height: | Size: 2.3 MiB |
docs/_static/remote_or_local.gif (new binary file, not shown)
After Width: | Height: | Size: 204 KiB |
|
@ -1,270 +0,0 @@
|
|||
# Building a Llama Stack Distribution
|
||||
|
||||
This guide will walk you through the steps to get started with building a Llama Stack distribution from scratch with your choice of API providers. Please see the [Getting Started Guide](./getting_started.md) if you just want the basic steps to start a Llama Stack distribution.
|
||||
|
||||
## Step 1. Build
|
||||
In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will start building our distribution (in the form of a Conda environment or Docker image). In this step, we will specify:
|
||||
- `name`: the name for our distribution (e.g. `8b-instruct`)
|
||||
- `image_type`: our build image type (`conda | docker`)
|
||||
- `distribution_spec`: our distribution specs for specifying API providers
|
||||
- `description`: a short description of the configurations for the distribution
|
||||
- `providers`: specifies the underlying implementation for serving each API endpoint
|
||||
- `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of Docker image or Conda environment.
|
||||
|
||||
|
||||
At the end of the build command, a `<name>-build.yaml` file storing the build configurations will be generated.
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
|
||||
#### Building from scratch
|
||||
- For a new user, you can start by running `llama stack build`, which launches an interactive wizard that prompts you for the build configurations.
|
||||
```
|
||||
llama stack build
|
||||
```
|
||||
|
||||
Running the command above lets you fill in the configuration to build your Llama Stack distribution; you will see output like the following.
|
||||
|
||||
```
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): 8b-instruct
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-llama-stack/8b-instruct-build.yaml
|
||||
```
|
||||
|
||||
**Ollama (optional)**
|
||||
|
||||
If you plan to use Ollama for inference, you'll need to install the server [via these instructions](https://ollama.com/download).
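After installing the Ollama server, pull the model(s) you plan to serve. A minimal sketch using the two fp16 models referenced elsewhere in these docs:

```bash
ollama pull llama3.1:8b-instruct-fp16
# optionally, the larger model as well
ollama pull llama3.1:70b-instruct-fp16
```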
|
||||
|
||||
|
||||
#### Building from templates
|
||||
- To build a distribution backed by alternative API providers, we provide distribution templates to help you get started.
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
```
|
||||
|
||||

|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
|
||||
```
|
||||
llama stack build --template tgi
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --template tgi
|
||||
...
|
||||
...
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-tgi/tgi-build.yaml
|
||||
You may now run `llama stack configure tgi` or `llama stack configure ~/.conda/envs/llamastack-tgi/tgi-build.yaml`
|
||||
```
|
||||
|
||||
#### Building from config file
|
||||
- In addition to templates, you may customize the build to your liking by editing config files and building from them with the following command.
|
||||
|
||||
- The config file will have contents like the ones in `llama_stack/distributions/templates/`.
|
||||
|
||||
```
|
||||
$ cat llama_stack/templates/ollama/build.yaml
|
||||
|
||||
name: ollama
|
||||
distribution_spec:
|
||||
description: Like local, but use ollama for running LLM inference
|
||||
providers:
|
||||
inference: remote::ollama
|
||||
memory: meta-reference
|
||||
safety: meta-reference
|
||||
agents: meta-reference
|
||||
telemetry: meta-reference
|
||||
image_type: conda
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/templates/ollama/build.yaml
|
||||
```
|
||||
|
||||
#### How to build distribution with Docker image
|
||||
|
||||
> [!TIP]
|
||||
> Podman is supported as an alternative to Docker. Set `DOCKER_BINARY` to `podman` in your environment to use Podman.
|
||||
|
||||
To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --template local --image-type docker
|
||||
```
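If you use Podman instead of Docker, a minimal sketch relying on the `DOCKER_BINARY` variable mentioned in the tip above:

```bash
export DOCKER_BINARY=podman
llama stack build --template local --image-type docker
```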
|
||||
|
||||
Alternatively, you may use a config file: set `image_type` to `docker` in your `<name>-build.yaml` file and run `llama stack build --config <name>-build.yaml`. The `<name>-build.yaml` will have contents like:
|
||||
|
||||
```
|
||||
name: local-docker-example
|
||||
distribution_spec:
|
||||
description: Use code from `llama_stack` itself to serve all llama stack APIs
|
||||
docker_image: null
|
||||
providers:
|
||||
inference: meta-reference
|
||||
memory: meta-reference-faiss
|
||||
safety: meta-reference
|
||||
agentic_system: meta-reference
|
||||
telemetry: console
|
||||
image_type: docker
|
||||
```
|
||||
|
||||
The following command allows you to build a Docker image with the name `<name>`
|
||||
```
|
||||
llama stack build --config <name>-build.yaml
|
||||
|
||||
Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/DockerfileFROM python:3.10-slim
|
||||
WORKDIR /app
|
||||
...
|
||||
...
|
||||
You can run it with: podman run -p 8000:8000 llamastack-docker-local
|
||||
Build spec configuration saved at ~/.llama/distributions/docker/docker-local-build.yaml
|
||||
```
|
||||
|
||||
|
||||
## Step 2. Configure
|
||||
After our distribution is built (either as a Docker image or a Conda environment), we will run the following command to configure it:
|
||||
```
|
||||
llama stack configure [ <docker-image-name> | <path/to/name.build.yaml>]
|
||||
```
|
||||
- For `conda` environments: <path/to/name.build.yaml> would be the generated build spec saved from Step 1.
|
||||
- For `docker` images downloaded from Dockerhub, you could also use <docker-image-name> as the argument.
|
||||
- Run `docker images` to check list of available images on your machine.
|
||||
|
||||
```
|
||||
$ llama stack configure tgi
|
||||
|
||||
Configuring API: inference (meta-reference)
|
||||
Enter value for model (existing: Meta-Llama3.1-8B-Instruct) (required):
|
||||
Enter value for quantization (optional):
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (existing: 4096) (required):
|
||||
Enter value for max_batch_size (existing: 1) (required):
|
||||
|
||||
Configuring API: memory (meta-reference-faiss)
|
||||
|
||||
Configuring API: safety (meta-reference)
|
||||
Do you want to configure llama_guard_shield? (y/n): y
|
||||
Entering sub-configuration for llama_guard_shield:
|
||||
Enter value for model (default: Llama-Guard-3-1B) (required):
|
||||
Enter value for excluded_categories (default: []) (required):
|
||||
Enter value for disable_input_check (default: False) (required):
|
||||
Enter value for disable_output_check (default: False) (required):
|
||||
Do you want to configure prompt_guard_shield? (y/n): y
|
||||
Entering sub-configuration for prompt_guard_shield:
|
||||
Enter value for model (default: Prompt-Guard-86M) (required):
|
||||
|
||||
Configuring API: agentic_system (meta-reference)
|
||||
Enter value for brave_search_api_key (optional):
|
||||
Enter value for bing_search_api_key (optional):
|
||||
Enter value for wolfram_api_key (optional):
|
||||
|
||||
Configuring API: telemetry (console)
|
||||
|
||||
YAML configuration has been written to ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find a run configuration spec in `~/.llama/builds/conda/tgi-run.yaml` with the following contents. You may edit this file to change the settings.
|
||||
|
||||
As you can see, we did basic configuration above and configured:
|
||||
- inference to run on model `Meta-Llama3.1-8B-Instruct` (obtained from `llama model list`)
|
||||
- Llama Guard safety shield with model `Llama-Guard-3-1B`
|
||||
- Prompt Guard safety shield with model `Prompt-Guard-86M`
|
||||
|
||||
For how these configurations are stored as YAML, check out the file printed at the end of the configuration.
|
||||
|
||||
Note that all configurations as well as models are stored in `~/.llama`
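For example, you can inspect what has been written there (the layout below is indicative; the exact subdirectories depend on your build type):

```bash
ls ~/.llama
ls ~/.llama/builds/conda   # run configurations written by `llama stack configure`
```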
|
||||
|
||||
|
||||
## Step 3. Run
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end of the `llama stack configure` step.
|
||||
|
||||
```
|
||||
llama stack run 8b-instruct
|
||||
```
|
||||
|
||||
You should see the Llama Stack server start and print the APIs that it supports:
|
||||
|
||||
```
|
||||
$ llama stack run 8b-instruct
|
||||
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
Loaded in 19.28 seconds
|
||||
NCCL version 2.20.5+cuda12.4
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/batch_chat_completion
|
||||
Serving POST /inference/batch_completion
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/memory_bank/attach
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/memory_bank/detach
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Listening on :::5000
|
||||
INFO: Started server process [453333]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Configuration is in `~/.llama/builds/local/conda/tgi-run.yaml`. Feel free to increase `max_seq_len`.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The "local" distribution inference server currently only supports CUDA. It will not work on Apple Silicon machines.
|
||||
|
||||
> [!TIP]
|
||||
> You might need to use the flag `--disable-ipv6` to disable IPv6 support.
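For example, a sketch assuming the flag is passed to `llama stack run` (adjust if your version accepts it elsewhere):

```bash
llama stack run 8b-instruct --disable-ipv6
```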
|
||||
|
||||
This server is running a Llama model locally.
|
||||
|
||||
## Step 4. Test with Client
|
||||
Once the server is set up, we can test it with a client to see example outputs.
|
||||
```
|
||||
cd /path/to/llama-stack
|
||||
conda activate <env> # any environment containing the llama-stack pip package will work
|
||||
|
||||
python -m llama_stack.apis.inference.client localhost 5000
|
||||
```
|
||||
|
||||
This will run the chat completion client and query the distribution’s /inference/chat_completion API.
|
||||
|
||||
Here is an example output:
|
||||
```
|
||||
User>hello world, write me a 2 sentence poem about the moon
|
||||
Assistant> Here's a 2-sentence poem about the moon:
|
||||
|
||||
The moon glows softly in the midnight sky,
|
||||
A beacon of wonder, as it passes by.
|
||||
```
|
||||
|
||||
Similarly, you can test safety (if you configured llama-guard and/or prompt-guard shields) by running:
|
||||
|
||||
```
|
||||
python -m llama_stack.apis.safety.client localhost 5000
|
||||
```
|
||||
|
||||
|
||||
Check out our client SDKs for connecting to the Llama Stack server in your preferred language: you can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) to quickly build your applications.
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
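If you prefer to hit the REST endpoint directly instead of using the client module, here is a minimal curl sketch. The path and port come from the server log above; the JSON field names are assumptions for illustration, so check the API reference for the exact request schema:

```bash
# Field names below are illustrative -- consult the API reference for the exact schema
curl -s http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{"model": "Meta-Llama3.1-8B-Instruct", "messages": [{"role": "user", "content": "Hello!"}]}'
```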
|
|
@ -1,485 +0,0 @@
|
|||
# Llama CLI Reference
|
||||
|
||||
The `llama` CLI tool helps you set up and use the Llama Stack & agentic systems. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
### Subcommands
|
||||
1. `download`: Supports downloading models from Meta or Hugging Face.
|
||||
2. `model`: Lists available models and their properties.
|
||||
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](cli_reference.md#step-3-building-and-configuring-llama-stack-distributions).
|
||||
|
||||
### Sample Usage
|
||||
|
||||
```
|
||||
llama --help
|
||||
```
|
||||
<pre style="font-family: monospace;">
|
||||
usage: llama [-h] {download,model,stack} ...
|
||||
|
||||
Welcome to the Llama CLI
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
subcommands:
|
||||
{download,model,stack}
|
||||
</pre>
|
||||
|
||||
## Step 1. Get the models
|
||||
|
||||
You first need to have models downloaded locally.
|
||||
|
||||
To download any model you need the **Model Descriptor**.
|
||||
This can be obtained by running the command
|
||||
```
|
||||
llama model list
|
||||
```
|
||||
|
||||
You should see a table like this:
|
||||
|
||||
<pre style="font-family: monospace;">
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Model Descriptor | Hugging Face Repo | Context Length |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B | meta-llama/Llama-3.1-70B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp8 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B | meta-llama/Llama-3.1-405B-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp16 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B-Instruct | meta-llama/Llama-3.1-8B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B-Instruct | meta-llama/Llama-3.1-70B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp8 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct | meta-llama/Llama-3.1-405B-Instruct-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B | meta-llama/Llama-3.2-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B | meta-llama/Llama-3.2-3B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision | meta-llama/Llama-3.2-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision | meta-llama/Llama-3.2-90B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B-Instruct | meta-llama/Llama-3.2-1B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B-Instruct | meta-llama/Llama-3.2-3B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision-Instruct | meta-llama/Llama-3.2-11B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision-Instruct | meta-llama/Llama-3.2-90B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-11B-Vision | meta-llama/Llama-Guard-3-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B:int4-mp1 | meta-llama/Llama-Guard-3-1B-INT4 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B | meta-llama/Llama-Guard-3-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B | meta-llama/Llama-Guard-3-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B:int8-mp1 | meta-llama/Llama-Guard-3-8B-INT8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Prompt-Guard-86M | meta-llama/Prompt-Guard-86M | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-2-8B | meta-llama/Llama-Guard-2-8B | 4K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
</pre>
|
||||
|
||||
To download models, you can use the llama download command.
|
||||
|
||||
#### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
|
||||
|
||||
Here is an example download command to get the 3B-Instruct and 11B-Vision-Instruct models. You will need the META_URL, which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
|
||||
|
||||
Download the required checkpoints using the following commands:
|
||||
```bash
|
||||
# download the 3B-Instruct model, this can be run on a single GPU
|
||||
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
|
||||
|
||||
# you can also get the 11B-Vision-Instruct model
|
||||
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
|
||||
|
||||
# llama-agents have safety enabled by default. For this, you will need
|
||||
# safety models -- Llama-Guard and Prompt-Guard
|
||||
llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
|
||||
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
|
||||
```
|
||||
|
||||
#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
|
||||
|
||||
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
|
||||
|
||||
```bash
|
||||
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama3.1-70B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama-Guard-3-1B --ignore-patterns *original*
|
||||
llama download --source huggingface --model-id Prompt-Guard-86M --ignore-patterns *original*
|
||||
```
|
||||
|
||||
**Important:** Set your environment variable `HF_TOKEN` or pass in `--hf-token` to the command to validate your access. You can find your token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
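For example, a sketch:

```bash
# Export the token once per shell session...
export HF_TOKEN=<your token from https://huggingface.co/settings/tokens>
# ...or pass it explicitly per download
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token "$HF_TOKEN"
```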
|
||||
|
||||
> **Tip:** Default for `llama download` is to run with `--ignore-patterns *.safetensors` since we use the `.pth` files in the `original` folder. For Llama Guard and Prompt Guard, however, we need safetensors. Hence, please run with `--ignore-patterns original` so that safetensors are downloaded and `.pth` files are ignored.
|
||||
|
||||
#### Downloading via Ollama
|
||||
|
||||
If you're already using Ollama, we also have a supported Llama Stack distribution, `local-ollama`, and you can continue to use Ollama for managing model downloads.
|
||||
|
||||
```
|
||||
ollama pull llama3.1:8b-instruct-fp16
|
||||
ollama pull llama3.1:70b-instruct-fp16
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Only the above two models are currently supported by Ollama.
|
||||
|
||||
|
||||
## Step 2: Understand the models
|
||||
The `llama model` command helps you explore the model’s interface.
|
||||
|
||||
### 2.1 Subcommands
|
||||
1. `download`: Download the model from different sources. (meta, huggingface)
|
||||
2. `list`: Lists all the models available for download with hardware requirements to deploy the models.
|
||||
3. `prompt-format`: Show llama model message formats.
|
||||
4. `describe`: Describes all the properties of the model.
|
||||
|
||||
### 2.2 Sample Usage
|
||||
|
||||
`llama model <subcommand> <options>`
|
||||
|
||||
```
|
||||
llama model --help
|
||||
```
|
||||
<pre style="font-family: monospace;">
|
||||
usage: llama model [-h] {download,list,prompt-format,describe} ...
|
||||
|
||||
Work with llama models
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
model_subcommands:
|
||||
{download,list,prompt-format,describe}
|
||||
</pre>
|
||||
|
||||
You can use the describe command to know more about a model:
|
||||
```
|
||||
llama model describe -m Llama3.2-3B-Instruct
|
||||
```
|
||||
### 2.3 Describe
|
||||
|
||||
<pre style="font-family: monospace;">
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model | Llama3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Hugging Face ID | meta-llama/Llama-3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Description | Llama 3.2 3b instruct model |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Context Length | 128K tokens |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Weights format | bf16 |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model params.json | { |
|
||||
| | "dim": 3072, |
|
||||
| | "n_layers": 28, |
|
||||
| | "n_heads": 24, |
|
||||
| | "n_kv_heads": 8, |
|
||||
| | "vocab_size": 128256, |
|
||||
| | "ffn_dim_multiplier": 1.0, |
|
||||
| | "multiple_of": 256, |
|
||||
| | "norm_eps": 1e-05, |
|
||||
| | "rope_theta": 500000.0, |
|
||||
| | "use_scaled_rope": true |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Recommended sampling params | { |
|
||||
| | "strategy": "top_p", |
|
||||
| | "temperature": 1.0, |
|
||||
| | "top_p": 0.9, |
|
||||
| | "top_k": 0 |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
</pre>
|
||||
### 2.4 Prompt Format
|
||||
You can even run `llama model prompt-format` to see all of the templates and their tokens:
|
||||
|
||||
```
|
||||
llama model prompt-format -m Llama3.2-3B-Instruct
|
||||
```
|
||||

|
||||
|
||||
|
||||
|
||||
You will be shown a Markdown formatted description of the model interface and how prompts / messages are formatted for various scenarios.
|
||||
|
||||
**NOTE**: Outputs in terminal are color printed to show special tokens.
|
||||
|
||||
|
||||
## Step 3: Building, and Configuring Llama Stack Distributions
|
||||
|
||||
- Please see our [Getting Started](getting_started.md) guide for more details on how to build and start a Llama Stack distribution.
|
||||
|
||||
### Step 3.1 Build
|
||||
In the following steps, imagine we'll be working with a `Llama3.1-8B-Instruct` model. We will name our build `tgi` to help us remember the config. We will start building our distribution (in the form of a Conda environment or Docker image). In this step, we will specify:
|
||||
- `name`: the name for our distribution (e.g. `tgi`)
|
||||
- `image_type`: our build image type (`conda | docker`)
|
||||
- `distribution_spec`: our distribution specs for specifying API providers
|
||||
- `description`: a short description of the configurations for the distribution
|
||||
- `providers`: specifies the underlying implementation for serving each API endpoint
|
||||
- `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of Docker image or Conda environment.
|
||||
|
||||
|
||||
At the end of the build command, a `<name>-build.yaml` file storing the build configurations will be generated.
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
|
||||
#### Building from scratch
|
||||
- For a new user, you can start by running `llama stack build`, which launches an interactive wizard that prompts you for the build configurations.
|
||||
```
|
||||
llama stack build
|
||||
```
|
||||
|
||||
Running the command above lets you fill in the configuration to build your Llama Stack distribution; you will see output like the following.
|
||||
|
||||
```
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): my-local-llama-stack
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-llama-stack/my-local-llama-stack-build.yaml
|
||||
```
|
||||
|
||||
#### Building from templates
|
||||
- To build a distribution backed by alternative API providers, we provide distribution templates to help you get started.
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
```
|
||||
|
||||

|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
|
||||
```
|
||||
llama stack build --template tgi --image-type conda
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --template tgi --image-type conda
|
||||
...
|
||||
...
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-tgi/tgi-build.yaml
|
||||
You may now run `llama stack configure tgi` or `llama stack configure ~/.conda/envs/llamastack-tgi/tgi-build.yaml`
|
||||
```
|
||||
|
||||
#### Building from config file
|
||||
- In addition to templates, you may customize the build to your liking by editing config files and building from them with the following command.
|
||||
|
||||
- The config file will have contents like the ones in `llama_stack/templates/`.
|
||||
|
||||
```
|
||||
$ cat build.yaml
|
||||
|
||||
name: ollama
|
||||
distribution_spec:
|
||||
description: Like local, but use ollama for running LLM inference
|
||||
providers:
|
||||
inference: remote::ollama
|
||||
memory: meta-reference
|
||||
safety: meta-reference
|
||||
agents: meta-reference
|
||||
telemetry: meta-reference
|
||||
image_type: conda
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --config build.yaml
|
||||
```
|
||||
|
||||
#### How to build distribution with Docker image
|
||||
|
||||
To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --template tgi --image-type docker
|
||||
```
|
||||
|
||||
Alternatively, you may use a config file: set `image_type` to `docker` in your `<name>-build.yaml` file and run `llama stack build --config <name>-build.yaml`. The `<name>-build.yaml` will have contents like:
|
||||
|
||||
```
|
||||
name: local-docker-example
|
||||
distribution_spec:
|
||||
description: Use code from `llama_stack` itself to serve all llama stack APIs
|
||||
docker_image: null
|
||||
providers:
|
||||
inference: meta-reference
|
||||
memory: meta-reference-faiss
|
||||
safety: meta-reference
|
||||
agentic_system: meta-reference
|
||||
telemetry: console
|
||||
image_type: docker
|
||||
```
|
||||
|
||||
The following command allows you to build a Docker image with the name `<name>`
|
||||
```
|
||||
llama stack build --config <name>-build.yaml
|
||||
|
||||
Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/DockerfileFROM python:3.10-slim
|
||||
WORKDIR /app
|
||||
...
|
||||
...
|
||||
You can run it with: podman run -p 8000:8000 llamastack-docker-local
|
||||
Build spec configuration saved at ~/.llama/distributions/docker/docker-local-build.yaml
|
||||
```
|
||||
|
||||
|
||||
### Step 3.2 Configure
|
||||
After our distribution is built (either as a Docker image or a Conda environment), we will run the following command to configure it:
|
||||
```
|
||||
llama stack configure [ <docker-image-name> | <path/to/name-build.yaml>]
|
||||
```
|
||||
- For `conda` environments: <path/to/name.build.yaml> would be the generated build spec saved from Step 1.
|
||||
- For `docker` images downloaded from Dockerhub, you could also use <docker-image-name> as the argument.
|
||||
- Run `docker images` to check list of available images on your machine.
|
||||
|
||||
```
|
||||
$ llama stack configure ~/.llama/distributions/conda/tgi-build.yaml
|
||||
|
||||
Configuring API: inference (meta-reference)
|
||||
Enter value for model (existing: Llama3.1-8B-Instruct) (required):
|
||||
Enter value for quantization (optional):
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (existing: 4096) (required):
|
||||
Enter value for max_batch_size (existing: 1) (required):
|
||||
|
||||
Configuring API: memory (meta-reference-faiss)
|
||||
|
||||
Configuring API: safety (meta-reference)
|
||||
Do you want to configure llama_guard_shield? (y/n): y
|
||||
Entering sub-configuration for llama_guard_shield:
|
||||
Enter value for model (default: Llama-Guard-3-1B) (required):
|
||||
Enter value for excluded_categories (default: []) (required):
|
||||
Enter value for disable_input_check (default: False) (required):
|
||||
Enter value for disable_output_check (default: False) (required):
|
||||
Do you want to configure prompt_guard_shield? (y/n): y
|
||||
Entering sub-configuration for prompt_guard_shield:
|
||||
Enter value for model (default: Prompt-Guard-86M) (required):
|
||||
|
||||
Configuring API: agentic_system (meta-reference)
|
||||
Enter value for brave_search_api_key (optional):
|
||||
Enter value for bing_search_api_key (optional):
|
||||
Enter value for wolfram_api_key (optional):
|
||||
|
||||
Configuring API: telemetry (console)
|
||||
|
||||
YAML configuration has been written to ~/.llama/builds/conda/8b-instruct-run.yaml
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find a run configuration spec in `~/.llama/builds/conda/8b-instruct-run.yaml` with the following contents. You may edit this file to change the settings.
|
||||
|
||||
As you can see, we did basic configuration above and configured:
|
||||
- inference to run on model `Llama3.1-8B-Instruct` (obtained from `llama model list`)
|
||||
- Llama Guard safety shield with model `Llama-Guard-3-1B`
|
||||
- Prompt Guard safety shield with model `Prompt-Guard-86M`
|
||||
|
||||
For how these configurations are stored as YAML, check out the file printed at the end of the configuration.
|
||||
|
||||
Note that all configurations as well as models are stored in `~/.llama`
|
||||
|
||||
|
||||
### Step 3.3 Run
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end of the `llama stack configure` step.
|
||||
|
||||
```
|
||||
llama stack run ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
You should see the Llama Stack server start and print the APIs that it supports:
|
||||
|
||||
```
|
||||
$ llama stack run ~/.llama/builds/local/conda/tgi-run.yaml
|
||||
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
Loaded in 19.28 seconds
|
||||
NCCL version 2.20.5+cuda12.4
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/batch_chat_completion
|
||||
Serving POST /inference/batch_completion
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/memory_bank/attach
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/memory_bank/detach
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Listening on :::5000
|
||||
INFO: Started server process [453333]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Configuration is in `~/.llama/builds/local/conda/tgi-run.yaml`. Feel free to increase `max_seq_len`.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The "local" distribution inference server currently only supports CUDA. It will not work on Apple Silicon machines.
|
||||
|
||||
> [!TIP]
|
||||
> You might need to use the flag `--disable-ipv6` to disable IPv6 support.
|
||||
|
||||
This server is running a Llama model locally.
|
||||
|
||||
### Step 3.4 Test with Client
|
||||
Once the server is set up, we can test it with a client to see example outputs.
|
||||
```
|
||||
cd /path/to/llama-stack
|
||||
conda activate <env> # any environment containing the llama-stack pip package will work
|
||||
|
||||
python -m llama_stack.apis.inference.client localhost 5000
|
||||
```
|
||||
|
||||
This will run the chat completion client and query the distribution’s /inference/chat_completion API.
|
||||
|
||||
Here is an example output:
|
||||
```
|
||||
User>hello world, write me a 2 sentence poem about the moon
|
||||
Assistant> Here's a 2-sentence poem about the moon:
|
||||
|
||||
The moon glows softly in the midnight sky,
|
||||
A beacon of wonder, as it passes by.
|
||||
```
|
||||
|
||||
Similarly, you can test safety (if you configured llama-guard and/or prompt-guard shields) by running:
|
||||
|
||||
```
|
||||
python -m llama_stack.apis.safety.client localhost 5000
|
||||
```
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
|
|
@ -36,7 +36,7 @@
|
|||
"1. Get Docker container\n",
|
||||
"```\n",
|
||||
"$ docker login\n",
|
||||
"$ docker pull llamastack/llamastack-local-gpu\n",
|
||||
"$ docker pull llamastack/llamastack-meta-reference-gpu\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"2. pip install the llama stack client package \n",
|
||||
|
@ -61,15 +61,15 @@
|
|||
"```\n",
|
||||
"For GPU inference, you need to set these environment variables for specifying local directory containing your model checkpoints, and enable GPU inference to start running docker container.\n",
|
||||
"$ export LLAMA_CHECKPOINT_DIR=~/.llama\n",
|
||||
"$ llama stack configure llamastack-local-gpu\n",
|
||||
"$ llama stack configure llamastack-meta-reference-gpu\n",
|
||||
"```\n",
|
||||
"Follow the prompts as part of configure.\n",
|
||||
"Here is a sample output \n",
|
||||
"```\n",
|
||||
"$ llama stack configure llamastack-local-gpu\n",
|
||||
"$ llama stack configure llamastack-meta-reference-gpu\n",
|
||||
"\n",
|
||||
"Could not find /home/hjshah/.conda/envs/llamastack-llamastack-local-gpu/llamastack-local-gpu-build.yaml. Trying docker image name instead...\n",
|
||||
"+ podman run --network host -it -v /home/hjshah/.llama/builds/docker:/app/builds llamastack-local-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
|
||||
"Could not find ~/.conda/envs/llamastack-llamastack-meta-reference-gpu/llamastack-meta-reference-gpu-build.yaml. Trying docker image name instead...\n",
|
||||
"+ podman run --network host -it -v ~/.llama/builds/docker:/app/builds llamastack-meta-reference-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
|
||||
"\n",
|
||||
"Configuring API `inference`...\n",
|
||||
"=== Configuring provider `meta-reference` for API inference...\n",
|
||||
|
|
|
@ -1,230 +0,0 @@
|
|||
# Getting Started with Llama Stack
|
||||
|
||||
This guide will walk you through the steps to get started with the end-to-end flow for Llama Stack. It mainly focuses on building a Llama Stack distribution and starting up a Llama Stack server. Please see our [documentation](../README.md) for what you can do with Llama Stack, and [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) for example apps built with Llama Stack.
|
||||
|
||||
## Installation
|
||||
The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
You have two ways to install this repository:
|
||||
|
||||
1. **Install as a package**:
|
||||
You can install the repository directly from [PyPI](https://pypi.org/project/llama-stack/) by running the following command:
|
||||
```bash
|
||||
pip install llama-stack
|
||||
```
|
||||
|
||||
2. **Install from source**:
|
||||
If you prefer to install from the source code, follow these steps:
|
||||
```bash
|
||||
mkdir -p ~/local
|
||||
cd ~/local
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
|
||||
conda create -n stack python=3.10
|
||||
conda activate stack
|
||||
|
||||
cd llama-stack
|
||||
$CONDA_PREFIX/bin/pip install -e .
|
||||
```
|
||||
|
||||
For what you can do with the Llama CLI, please refer to [CLI Reference](./cli_reference.md).
|
||||
|
||||
## Starting Up Llama Stack Server
|
||||
|
||||
You have two ways to start up the Llama Stack server:
|
||||
|
||||
1. **Starting up server via docker**:
|
||||
|
||||
We provide pre-built Docker images of Llama Stack distributions, which can be found in the [distributions](../distributions/) folder.
|
||||
|
||||
> [!NOTE]
|
||||
> For GPU inference, you need to set the following environment variable to specify the local directory containing your model checkpoints before starting the Docker container.
|
||||
```
|
||||
export LLAMA_CHECKPOINT_DIR=~/.llama
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> `~/.llama` should be the path containing downloaded weights of Llama models.
|
||||
|
||||
To download Llama models, use
|
||||
```
|
||||
llama download --model-id Llama3.1-8B-Instruct
|
||||
```
|
||||
|
||||
To download and start running a pre-built docker container, you may use the following commands:
|
||||
|
||||
```
|
||||
cd llama-stack/distributions/meta-reference-gpu
|
||||
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
|
||||
```
|
||||
|
||||
> [!TIP]
|
||||
> Pro Tip: You may use `docker compose up` to start a distribution with remote providers (e.g. TGI) using [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general). You can check out [these scripts](../distributions/) to help you get started.
|
||||
|
||||
|
||||
2. **Build->Configure->Run Llama Stack server via conda**:
|
||||
|
||||
You may also build a LlamaStack distribution from scratch, configure it, and start running the distribution. This is useful for developing on LlamaStack.
|
||||
|
||||
**`llama stack build`**
|
||||
- You'll be prompted to enter build information interactively.
|
||||
```
|
||||
llama stack build
|
||||
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): my-local-stack
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-stack/my-local-stack-build.yaml
|
||||
You can now run `llama stack configure my-local-stack`
|
||||
```
|
||||
|
||||
**`llama stack configure`**
|
||||
- Run `llama stack configure <name>` with the name you have previously defined in the `build` step.
|
||||
```
|
||||
llama stack configure <name>
|
||||
```
|
||||
- You will be prompted to enter configurations for your Llama Stack.
|
||||
|
||||
```
|
||||
$ llama stack configure my-local-stack
|
||||
|
||||
Configuring API `inference`...
|
||||
=== Configuring provider `meta-reference` for API inference...
|
||||
Enter value for model (default: Llama3.1-8B-Instruct) (required):
|
||||
Do you want to configure quantization? (y/n): n
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (default: 4096) (required):
|
||||
Enter value for max_batch_size (default: 1) (required):
|
||||
|
||||
Configuring API `safety`...
|
||||
=== Configuring provider `meta-reference` for API safety...
|
||||
Do you want to configure llama_guard_shield? (y/n): n
|
||||
Do you want to configure prompt_guard_shield? (y/n): n
|
||||
|
||||
Configuring API `agents`...
|
||||
=== Configuring provider `meta-reference` for API agents...
|
||||
Enter `type` for persistence_store (options: redis, sqlite, postgres) (default: sqlite):
|
||||
|
||||
Configuring SqliteKVStoreConfig:
|
||||
Enter value for namespace (optional):
|
||||
Enter value for db_path (default: /home/xiyan/.llama/runtime/kvstore.db) (required):
|
||||
|
||||
Configuring API `memory`...
|
||||
=== Configuring provider `meta-reference` for API memory...
|
||||
> Please enter the supported memory bank type your provider has for memory: vector
|
||||
|
||||
Configuring API `telemetry`...
|
||||
=== Configuring provider `meta-reference` for API telemetry...
|
||||
|
||||
> YAML configuration has been written to ~/.llama/builds/conda/my-local-stack-run.yaml.
|
||||
You can now run `llama stack run my-local-stack --port PORT`
|
||||
```
|
||||
|
||||
**`llama stack run`**
|
||||
- Run `llama stack run <name>` with the name you have previously defined.
|
||||
```
|
||||
llama stack run my-local-stack
|
||||
|
||||
...
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
...
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /inference/embeddings
|
||||
Serving POST /memory_banks/create
|
||||
Serving DELETE /memory_bank/documents/delete
|
||||
Serving DELETE /memory_banks/drop
|
||||
Serving GET /memory_bank/documents/get
|
||||
Serving GET /memory_banks/get
|
||||
Serving POST /memory_bank/insert
|
||||
Serving GET /memory_banks/list
|
||||
Serving POST /memory_bank/query
|
||||
Serving POST /memory_bank/update
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Serving GET /telemetry/get_trace
|
||||
Serving POST /telemetry/log_event
|
||||
Listening on :::5000
|
||||
INFO: Started server process [587053]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
|
||||
## Testing with client
|
||||
Once the server is set up, we can test it with a client to see the example outputs.
|
||||
```
|
||||
cd /path/to/llama-stack
|
||||
conda activate <env> # any environment containing the llama-stack pip package will work
|
||||
|
||||
python -m llama_stack.apis.inference.client localhost 5000
|
||||
```
|
||||
|
||||
This will run the chat completion client and query the distribution’s `/inference/chat_completion` API.
|
||||
|
||||
Here is an example output:
|
||||
```
|
||||
User>hello world, write me a 2 sentence poem about the moon
|
||||
Assistant> Here's a 2-sentence poem about the moon:
|
||||
|
||||
The moon glows softly in the midnight sky,
|
||||
A beacon of wonder, as it passes by.
|
||||
```
|
||||
|
||||
You may also send a POST request to the server:
|
||||
```
|
||||
curl http://localhost:5000/inference/chat_completion \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"model": "Llama3.1-8B-Instruct",
|
||||
"messages": [
|
||||
{"role": "system", "content": "You are a helpful assistant."},
|
||||
{"role": "user", "content": "Write me a 2 sentence poem about the moon"}
|
||||
],
|
||||
"sampling_params": {"temperature": 0.7, "seed": 42, "max_tokens": 512}
|
||||
}'
|
||||
|
||||
Output:
|
||||
{'completion_message': {'role': 'assistant',
|
||||
'content': 'The moon glows softly in the midnight sky, \nA beacon of wonder, as it catches the eye.',
|
||||
'stop_reason': 'out_of_tokens',
|
||||
'tool_calls': []},
|
||||
'logprobs': null}
|
||||
|
||||
```
|
||||
|
||||
|
||||
Similarly, you can test safety (if you configured llama-guard and/or prompt-guard shields) by:
|
||||
|
||||
```
|
||||
python -m llama_stack.apis.safety.client localhost 5000
|
||||
```
|
||||
|
||||
|
||||
Check out our client SDKs for connecting to the Llama Stack server in your preferred language. You can choose from [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) to quickly build your applications.
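As a quick illustration, a minimal sketch with the Python SDK might look like the following. The class and method names are assumptions based on the `llama-stack-client-python` README and may differ between SDK versions, so check the SDK's own documentation for the current API.

```python
# Minimal sketch using the Python client SDK (pip install llama-stack-client).
# Class and method names are assumptions based on the SDK README and may change
# between versions -- consult the llama-stack-client-python repo for details.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write me a 2 sentence poem about the moon"},
    ],
)
print(response.completion_message.content)
```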
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
|
||||
|
||||
|
||||
## Advanced Guides
|
||||
Please see our [Building a Llama Stack Distribution](./building_distro.md) guide for more details on how to assemble your own Llama Stack Distribution.
|
|
@ -1,3 +1,9 @@
|
|||
sphinx
|
||||
myst-parser
|
||||
linkify
|
||||
-e git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme
|
||||
sphinx-rtd-theme>=1.0.0
|
||||
sphinx-pdj-theme
|
||||
sphinx-copybutton
|
||||
sphinx-tabs
|
||||
sphinx-design
|
||||
|
|
14
docs/source/api_providers/index.md
Normal file
|
@ -0,0 +1,14 @@
|
|||
# API Providers
|
||||
|
||||
A Provider is what makes the API real -- it provides the actual implementation backing the API.
|
||||
|
||||
As an example, the Inference API could be backed by open-source libraries like `[ torch | vLLM | TensorRT ]`.
|
||||
|
||||
A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
new_api_provider
|
||||
memory_api
|
||||
```
|
53
docs/source/api_providers/memory_api.md
Normal file
|
@ -0,0 +1,53 @@
|
|||
# Memory API Providers
|
||||
|
||||
This guide gives you references to switch between different memory API providers.
|
||||
|
||||
##### pgvector
|
||||
1. Start running the pgvector server:
|
||||
|
||||
```
|
||||
$ docker run --network host --name mypostgres -it -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres pgvector/pgvector:pg16
|
||||
```
|
||||
|
||||
2. Edit the `run.yaml` file to point to the pgvector server.
|
||||
```
|
||||
memory:
|
||||
- provider_id: pgvector
|
||||
provider_type: remote::pgvector
|
||||
config:
|
||||
host: 127.0.0.1
|
||||
port: 5432
|
||||
db: postgres
|
||||
user: postgres
|
||||
password: mysecretpassword
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> If you get a `RuntimeError: Vector extension is not installed.` error, you will need to run `CREATE EXTENSION IF NOT EXISTS vector;` to enable the vector extension. E.g.
|
||||
|
||||
```
|
||||
docker exec -it mypostgres ./bin/psql -U postgres
|
||||
postgres=# CREATE EXTENSION IF NOT EXISTS vector;
|
||||
postgres=# SELECT extname from pg_extension;
|
||||
extname
|
||||
```
|
||||
|
||||
3. Run `docker compose up` with the updated `run.yaml` file.
|
||||
|
||||
##### chromadb
|
||||
1. Start running the chromadb server:
|
||||
```
|
||||
docker run -it --network host --name chromadb -p 6000:6000 -v ./chroma_vdb:/chroma/chroma -e IS_PERSISTENT=TRUE chromadb/chroma:latest
|
||||
```
|
||||
|
||||
2. Edit the `run.yaml` file to point to the chromadb server.
|
||||
```
|
||||
memory:
|
||||
- provider_id: chromadb
|
||||
provider_type: remote::chromadb
|
||||
config:
|
||||
host: localhost
|
||||
port: 6000
|
||||
```
|
||||
|
||||
3. Run `docker compose up` with the updated `run.yaml` file.
|
|
@ -6,10 +6,10 @@ This guide contains references to walk you through adding a new API provider.
|
|||
1. First, decide which API your provider falls into (e.g. Inference, Safety, Agents, Memory).
|
||||
2. Decide whether your provider is a remote provider or an inline implementation. A remote provider is a provider that makes a remote request to a service. An inline provider is a provider whose implementation is executed locally. Check out the examples, and follow the structure to add your own API provider. Please find the following code pointers:
|
||||
|
||||
- [Inference Remote Adapter](../llama_stack/providers/adapters/inference/)
|
||||
- [Inference Inline Provider](../llama_stack/providers/impls/)
|
||||
- [Inference Remote Adapter](https://github.com/meta-llama/llama-stack/tree/docs/llama_stack/providers/adapters/inference)
|
||||
- [Inference Inline Provider](https://github.com/meta-llama/llama-stack/tree/docs/llama_stack/providers/impls/meta_reference/inference)
|
||||
|
||||
3. [Build a Llama Stack distribution](./building_distro.md) with your API provider.
|
||||
3. [Build a Llama Stack distribution](https://llama-stack.readthedocs.io/en/latest/distribution_dev/building_distro.html) with your API provider.
|
||||
4. Test your code!
|
||||
|
||||
### Testing your newly added API providers
|
|
@ -1,485 +0,0 @@
|
|||
# Llama CLI Reference
|
||||
|
||||
The `llama` CLI tool helps you set up and use the Llama Stack & agentic systems. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
## Subcommands
|
||||
1. `download`: The `llama` CLI supports downloading models from Meta or Hugging Face.
|
||||
2. `model`: Lists available models and their properties.
|
||||
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this in Step 3 below.
|
||||
|
||||
## Sample Usage
|
||||
|
||||
```
|
||||
llama --help
|
||||
```
|
||||
<pre style="font-family: monospace;">
|
||||
usage: llama [-h] {download,model,stack} ...
|
||||
|
||||
Welcome to the Llama CLI
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
subcommands:
|
||||
{download,model,stack}
|
||||
</pre>
|
||||
|
||||
## Step 1. Get the models
|
||||
|
||||
You first need to have models downloaded locally.
|
||||
|
||||
To download any model you need the **Model Descriptor**.
|
||||
This can be obtained by running the command
|
||||
```
|
||||
llama model list
|
||||
```
|
||||
|
||||
You should see a table like this:
|
||||
|
||||
<pre style="font-family: monospace;">
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Model Descriptor | Hugging Face Repo | Context Length |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B | meta-llama/Llama-3.1-70B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp8 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B | meta-llama/Llama-3.1-405B-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp16 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B-Instruct | meta-llama/Llama-3.1-8B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B-Instruct | meta-llama/Llama-3.1-70B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp8 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct | meta-llama/Llama-3.1-405B-Instruct-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B | meta-llama/Llama-3.2-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B | meta-llama/Llama-3.2-3B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision | meta-llama/Llama-3.2-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision | meta-llama/Llama-3.2-90B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B-Instruct | meta-llama/Llama-3.2-1B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B-Instruct | meta-llama/Llama-3.2-3B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision-Instruct | meta-llama/Llama-3.2-11B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision-Instruct | meta-llama/Llama-3.2-90B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-11B-Vision | meta-llama/Llama-Guard-3-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B:int4-mp1 | meta-llama/Llama-Guard-3-1B-INT4 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B | meta-llama/Llama-Guard-3-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B | meta-llama/Llama-Guard-3-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B:int8-mp1 | meta-llama/Llama-Guard-3-8B-INT8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Prompt-Guard-86M | meta-llama/Prompt-Guard-86M | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-2-8B | meta-llama/Llama-Guard-2-8B | 4K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
</pre>
|
||||
|
||||
To download models, you can use the `llama download` command.
|
||||
|
||||
### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
|
||||
|
||||
Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
|
||||
|
||||
Download the required checkpoints using the following commands:
|
||||
```bash
|
||||
# download the 3B-Instruct model; this can be run on a single GPU
|
||||
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
|
||||
|
||||
# you can also get the larger 11B-Vision-Instruct model, which needs more GPU memory
|
||||
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
|
||||
|
||||
# llama-agents have safety enabled by default. For this, you will need
|
||||
# safety models -- Llama-Guard and Prompt-Guard
|
||||
llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
|
||||
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
|
||||
```
|
||||
|
||||
### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
|
||||
|
||||
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
|
||||
|
||||
```bash
|
||||
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama3.1-70B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama-Guard-3-1B --ignore-patterns *original*
|
||||
llama download --source huggingface --model-id Prompt-Guard-86M --ignore-patterns *original*
|
||||
```
|
||||
|
||||
**Important:** Set your environment variable `HF_TOKEN` or pass in `--hf-token` to the command to validate your access. You can find your token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
|
||||
|
||||
> **Tip:** Default for `llama download` is to run with `--ignore-patterns *.safetensors` since we use the `.pth` files in the `original` folder. For Llama Guard and Prompt Guard, however, we need safetensors. Hence, please run with `--ignore-patterns original` so that safetensors are downloaded and `.pth` files are ignored.
|
||||
|
||||
### Downloading via Ollama
|
||||
|
||||
If you're already using ollama, we also have a supported Llama Stack distribution `local-ollama` and you can continue to use ollama for managing model downloads.
|
||||
|
||||
```
|
||||
ollama pull llama3.1:8b-instruct-fp16
|
||||
ollama pull llama3.1:70b-instruct-fp16
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Only the above two models are currently supported by Ollama.
|
||||
|
||||
|
||||
## Step 2: Understand the models
|
||||
The `llama model` command helps you explore the model’s interface.
|
||||
|
||||
### 2.1 Subcommands
|
||||
1. `download`: Download the model from different sources. (meta, huggingface)
|
||||
2. `list`: Lists all the models available for download with hardware requirements to deploy the models.
|
||||
3. `prompt-format`: Show llama model message formats.
|
||||
4. `describe`: Describes all the properties of the model.
|
||||
|
||||
### 2.2 Sample Usage
|
||||
|
||||
`llama model <subcommand> <options>`
|
||||
|
||||
```
|
||||
llama model --help
|
||||
```
|
||||
<pre style="font-family: monospace;">
|
||||
usage: llama model [-h] {download,list,prompt-format,describe} ...
|
||||
|
||||
Work with llama models
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
model_subcommands:
|
||||
{download,list,prompt-format,describe}
|
||||
</pre>
|
||||
|
||||
You can use the describe command to know more about a model:
|
||||
```
|
||||
llama model describe -m Llama3.2-3B-Instruct
|
||||
```
|
||||
### 2.3 Describe
|
||||
|
||||
<pre style="font-family: monospace;">
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model | Llama3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Hugging Face ID | meta-llama/Llama-3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Description | Llama 3.2 3b instruct model |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Context Length | 128K tokens |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Weights format | bf16 |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model params.json | { |
|
||||
| | "dim": 3072, |
|
||||
| | "n_layers": 28, |
|
||||
| | "n_heads": 24, |
|
||||
| | "n_kv_heads": 8, |
|
||||
| | "vocab_size": 128256, |
|
||||
| | "ffn_dim_multiplier": 1.0, |
|
||||
| | "multiple_of": 256, |
|
||||
| | "norm_eps": 1e-05, |
|
||||
| | "rope_theta": 500000.0, |
|
||||
| | "use_scaled_rope": true |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Recommended sampling params | { |
|
||||
| | "strategy": "top_p", |
|
||||
| | "temperature": 1.0, |
|
||||
| | "top_p": 0.9, |
|
||||
| | "top_k": 0 |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
</pre>
|
||||
### 2.4 Prompt Format
|
||||
You can even run `llama model prompt-format` to see all of the templates and their tokens:
|
||||
|
||||
```
|
||||
llama model prompt-format -m Llama3.2-3B-Instruct
|
||||
```
|
||||

|
||||
|
||||
|
||||
|
||||
You will be shown a Markdown formatted description of the model interface and how prompts / messages are formatted for various scenarios.
|
||||
|
||||
**NOTE**: Outputs in the terminal are color-printed to show special tokens.
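For reference, here is an abbreviated sketch of how a system and user message are rendered with the special tokens used by the Llama 3 family of models. The exact, model-specific template printed by `llama model prompt-format` is authoritative; treat this as an illustration only.

```python
# Abbreviated illustration of the Llama 3 family chat format. The template
# printed by `llama model prompt-format` for your model is authoritative.
def render_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(render_llama3_prompt("You are a helpful assistant.", "Hello!"))
```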
|
||||
|
||||
|
||||
## Step 3: Building, and Configuring Llama Stack Distributions
|
||||
|
||||
- Please see our [Getting Started](getting_started.md) guide for more details on how to build and start a Llama Stack distribution.
|
||||
|
||||
### Step 3.1 Build
|
||||
In the following steps, imagine we'll be working with the `Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will start building our distribution (in the form of a Conda environment or a Docker image). In this step, we will specify:
|
||||
- `name`: the name for our distribution (e.g. `8b-instruct`)
|
||||
- `image_type`: our build image type (`conda | docker`)
|
||||
- `distribution_spec`: our distribution specs for specifying API providers
|
||||
- `description`: a short description of the configurations for the distribution
|
||||
- `providers`: specifies the underlying implementation for serving each API endpoint
|
||||
- `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of Docker image or Conda environment.
|
||||
|
||||
|
||||
At the end of the build command, a `<name>-build.yaml` file storing the build configurations will be generated.
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
|
||||
#### Building from scratch
|
||||
- For a new user, you can start by running `llama stack build`, which launches an interactive wizard that prompts you for the build configurations.
|
||||
```
|
||||
llama stack build
|
||||
```
|
||||
|
||||
Running the command above will allow you to fill in the configuration to build your Llama Stack distribution; you will see the following output.
|
||||
|
||||
```
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): my-local-llama-stack
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-llama-stack/my-local-llama-stack-build.yaml
|
||||
```
|
||||
|
||||
#### Building from templates
|
||||
- To build with alternative API providers, we provide distribution templates to help you get started with a distribution backed by different providers.
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
```
|
||||
|
||||

|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
|
||||
```
|
||||
llama stack build --template tgi
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --template tgi
|
||||
...
|
||||
...
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-tgi/tgi-build.yaml
|
||||
You may now run `llama stack configure tgi` or `llama stack configure ~/.conda/envs/llamastack-tgi/tgi-build.yaml`
|
||||
```
|
||||
|
||||
#### Building from config file
|
||||
- In addition to templates, you may customize the build to your liking by editing config files and building from them with the following command.
|
||||
|
||||
- The config file will look like the ones in `llama_stack/distributions/templates/`.
|
||||
|
||||
```
|
||||
$ cat llama_stack/templates/ollama/build.yaml
|
||||
|
||||
name: ollama
|
||||
distribution_spec:
|
||||
description: Like local, but use ollama for running LLM inference
|
||||
providers:
|
||||
inference: remote::ollama
|
||||
memory: meta-reference
|
||||
safety: meta-reference
|
||||
agents: meta-reference
|
||||
telemetry: meta-reference
|
||||
image_type: conda
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/templates/ollama/build.yaml
|
||||
```
|
||||
|
||||
#### How to build distribution with Docker image
|
||||
|
||||
To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --template local --image-type docker
|
||||
```
|
||||
|
||||
Alternatively, you may use a config file, set `image_type` to `docker` in your `<name>-build.yaml` file, and run `llama stack build --config <name>-build.yaml`. The `<name>-build.yaml` will look like:
|
||||
|
||||
```
|
||||
name: local-docker-example
|
||||
distribution_spec:
|
||||
description: Use code from `llama_stack` itself to serve all llama stack APIs
|
||||
docker_image: null
|
||||
providers:
|
||||
inference: meta-reference
|
||||
memory: meta-reference-faiss
|
||||
safety: meta-reference
|
||||
agentic_system: meta-reference
|
||||
telemetry: console
|
||||
image_type: docker
|
||||
```
|
||||
|
||||
The following command allows you to build a Docker image with the name `<name>`
|
||||
```
|
||||
llama stack build --config <name>-build.yaml
|
||||
|
||||
Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/Dockerfile
FROM python:3.10-slim
|
||||
WORKDIR /app
|
||||
...
|
||||
...
|
||||
You can run it with: podman run -p 8000:8000 llamastack-docker-local
|
||||
Build spec configuration saved at ~/.llama/distributions/docker/docker-local-build.yaml
|
||||
```
|
||||
|
||||
|
||||
### Step 3.2 Configure
|
||||
After our distribution is built (either in the form of a Docker image or a Conda environment), we will run the following command to configure it:
|
||||
```
|
||||
llama stack configure [ <docker-image-name> | <path/to/name.build.yaml>]
|
||||
```
|
||||
- For `conda` environments: `<path/to/name.build.yaml>` would be the generated build spec saved from Step 3.1.
|
||||
- For `docker` images downloaded from Docker Hub, you could also use `<docker-image-name>` as the argument.
|
||||
- Run `docker images` to check the list of available images on your machine.
|
||||
|
||||
```
|
||||
$ llama stack configure ~/.llama/distributions/conda/tgi-build.yaml
|
||||
|
||||
Configuring API: inference (meta-reference)
|
||||
Enter value for model (existing: Llama3.1-8B-Instruct) (required):
|
||||
Enter value for quantization (optional):
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (existing: 4096) (required):
|
||||
Enter value for max_batch_size (existing: 1) (required):
|
||||
|
||||
Configuring API: memory (meta-reference-faiss)
|
||||
|
||||
Configuring API: safety (meta-reference)
|
||||
Do you want to configure llama_guard_shield? (y/n): y
|
||||
Entering sub-configuration for llama_guard_shield:
|
||||
Enter value for model (default: Llama-Guard-3-1B) (required):
|
||||
Enter value for excluded_categories (default: []) (required):
|
||||
Enter value for disable_input_check (default: False) (required):
|
||||
Enter value for disable_output_check (default: False) (required):
|
||||
Do you want to configure prompt_guard_shield? (y/n): y
|
||||
Entering sub-configuration for prompt_guard_shield:
|
||||
Enter value for model (default: Prompt-Guard-86M) (required):
|
||||
|
||||
Configuring API: agentic_system (meta-reference)
|
||||
Enter value for brave_search_api_key (optional):
|
||||
Enter value for bing_search_api_key (optional):
|
||||
Enter value for wolfram_api_key (optional):
|
||||
|
||||
Configuring API: telemetry (console)
|
||||
|
||||
YAML configuration has been written to ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find a run configuration spec in `~/.llama/builds/conda/tgi-run.yaml`. You may edit this file to change the settings.
|
||||
|
||||
As you can see, we did basic configuration above and configured:
|
||||
- inference to run on model `Llama3.1-8B-Instruct` (obtained from `llama model list`)
|
||||
- Llama Guard safety shield with model `Llama-Guard-3-1B`
|
||||
- Prompt Guard safety shield with model `Prompt-Guard-86M`
|
||||
|
||||
To see how these configurations are stored as YAML, check out the file printed at the end of the configuration step.
|
||||
|
||||
Note that all configurations, as well as models, are stored in `~/.llama`.
|
||||
|
||||
|
||||
### Step 3.3 Run
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end of the `llama stack configure` step.
|
||||
|
||||
```
|
||||
llama stack run ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
You should see the Llama Stack server start and print the APIs that it supports:
|
||||
|
||||
```
|
||||
$ llama stack run ~/.llama/builds/conda/tgi-run.yaml
|
||||
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
Loaded in 19.28 seconds
|
||||
NCCL version 2.20.5+cuda12.4
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/batch_chat_completion
|
||||
Serving POST /inference/batch_completion
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/memory_bank/attach
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/memory_bank/detach
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Listening on :::5000
|
||||
INFO: Started server process [453333]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Configuration is in `~/.llama/builds/local/conda/tgi-run.yaml`. Feel free to increase `max_seq_len`.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The "local" distribution inference server currently only supports CUDA. It will not work on Apple Silicon machines.
|
||||
|
||||
> [!TIP]
|
||||
> You might need to use the flag `--disable-ipv6` to disable IPv6 support.
|
||||
|
||||
This server is running a Llama model locally.
|
||||
|
||||
### Step 3.4 Test with Client
|
||||
Once the server is set up, we can test it with a client to see the example outputs.
|
||||
```
|
||||
cd /path/to/llama-stack
|
||||
conda activate <env> # any environment containing the llama-stack pip package will work
|
||||
|
||||
python -m llama_stack.apis.inference.client localhost 5000
|
||||
```
|
||||
|
||||
This will run the chat completion client and query the distribution’s `/inference/chat_completion` API.
|
||||
|
||||
Here is an example output:
|
||||
```
|
||||
User>hello world, write me a 2 sentence poem about the moon
|
||||
Assistant> Here's a 2-sentence poem about the moon:
|
||||
|
||||
The moon glows softly in the midnight sky,
|
||||
A beacon of wonder, as it passes by.
|
||||
```
|
||||
|
||||
Similarly, you can test safety (if you configured llama-guard and/or prompt-guard shields) by:
|
||||
|
||||
```
|
||||
python -m llama_stack.apis.safety.client localhost 5000
|
||||
```
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
|
131
docs/source/cli_reference/download_models.md
Normal file
|
@ -0,0 +1,131 @@
|
|||
# Downloading Models
|
||||
|
||||
The `llama` CLI tool helps you set up and use the Llama Stack. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
## Installation
|
||||
|
||||
You have two ways to install Llama Stack:
|
||||
|
||||
1. **Install as a package**:
|
||||
You can install the repository directly from [PyPI](https://pypi.org/project/llama-stack/) by running the following command:
|
||||
```bash
|
||||
pip install llama-stack
|
||||
```
|
||||
|
||||
2. **Install from source**:
|
||||
If you prefer to install from the source code, follow these steps:
|
||||
```bash
|
||||
mkdir -p ~/local
|
||||
cd ~/local
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
|
||||
conda create -n myenv python=3.10
|
||||
conda activate myenv
|
||||
|
||||
cd llama-stack
|
||||
$CONDA_PREFIX/bin/pip install -e .
```
|
||||
|
||||
## Downloading models via CLI
|
||||
|
||||
You first need to have models downloaded locally.
|
||||
|
||||
To download any model you need the **Model Descriptor**.
|
||||
This can be obtained by running the command
|
||||
```
|
||||
llama model list
|
||||
```
|
||||
|
||||
You should see a table like this:
|
||||
|
||||
```
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Model Descriptor | Hugging Face Repo | Context Length |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B | meta-llama/Llama-3.1-70B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp8 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B | meta-llama/Llama-3.1-405B-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp16 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B-Instruct | meta-llama/Llama-3.1-8B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B-Instruct | meta-llama/Llama-3.1-70B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp8 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct | meta-llama/Llama-3.1-405B-Instruct-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B | meta-llama/Llama-3.2-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B | meta-llama/Llama-3.2-3B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision | meta-llama/Llama-3.2-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision | meta-llama/Llama-3.2-90B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B-Instruct | meta-llama/Llama-3.2-1B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B-Instruct | meta-llama/Llama-3.2-3B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision-Instruct | meta-llama/Llama-3.2-11B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision-Instruct | meta-llama/Llama-3.2-90B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-11B-Vision | meta-llama/Llama-Guard-3-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B:int4-mp1 | meta-llama/Llama-Guard-3-1B-INT4 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B | meta-llama/Llama-Guard-3-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B | meta-llama/Llama-Guard-3-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B:int8-mp1 | meta-llama/Llama-Guard-3-8B-INT8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Prompt-Guard-86M | meta-llama/Prompt-Guard-86M | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-2-8B | meta-llama/Llama-Guard-2-8B | 4K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
```
|
||||
|
||||
To download models, you can use the `llama download` command.
|
||||
|
||||
#### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
|
||||
|
||||
Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
|
||||
|
||||
Download the required checkpoints using the following commands:
|
||||
```bash
|
||||
# download the 3B-Instruct model; this can be run on a single GPU
|
||||
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
|
||||
|
||||
# you can also get the larger 11B-Vision-Instruct model, which needs more GPU memory
|
||||
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
|
||||
|
||||
# llama-agents have safety enabled by default. For this, you will need
|
||||
# safety models -- Llama-Guard and Prompt-Guard
|
||||
llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
|
||||
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
|
||||
```
|
||||
|
||||
#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
|
||||
|
||||
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
|
||||
|
||||
```bash
|
||||
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama3.1-70B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama-Guard-3-1B --ignore-patterns *original*
|
||||
llama download --source huggingface --model-id Prompt-Guard-86M --ignore-patterns *original*
|
||||
```
|
||||
|
||||
**Important:** Set your environment variable `HF_TOKEN` or pass in `--hf-token` to the command to validate your access. You can find your token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
|
||||
|
||||
> **Tip:** Default for `llama download` is to run with `--ignore-patterns *.safetensors` since we use the `.pth` files in the `original` folder. For Llama Guard and Prompt Guard, however, we need safetensors. Hence, please run with `--ignore-patterns original` so that safetensors are downloaded and `.pth` files are ignored.
|
237
docs/source/cli_reference/index.md
Normal file
|
@ -0,0 +1,237 @@
|
|||
# CLI Reference
|
||||
|
||||
The `llama` CLI tool helps you set up and use the Llama Stack. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
## Installation
|
||||
|
||||
You have two ways to install Llama Stack:
|
||||
|
||||
1. **Install as a package**:
|
||||
You can install the repository directly from [PyPI](https://pypi.org/project/llama-stack/) by running the following command:
|
||||
```bash
|
||||
pip install llama-stack
|
||||
```
|
||||
|
||||
2. **Install from source**:
|
||||
If you prefer to install from the source code, follow these steps:
|
||||
```bash
|
||||
mkdir -p ~/local
|
||||
cd ~/local
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
|
||||
conda create -n myenv python=3.10
|
||||
conda activate myenv
|
||||
|
||||
cd llama-stack
|
||||
$CONDA_PREFIX/bin/pip install -e .
```
|
||||
|
||||
|
||||
## `llama` subcommands
|
||||
1. `download`: The `llama` CLI supports downloading models from Meta or Hugging Face.
|
||||
2. `model`: Lists available models and their properties.
|
||||
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](../distribution_dev/building_distro.md).
|
||||
|
||||
### Sample Usage
|
||||
|
||||
```
|
||||
llama --help
|
||||
```
|
||||
|
||||
```
|
||||
usage: llama [-h] {download,model,stack} ...
|
||||
|
||||
Welcome to the Llama CLI
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
subcommands:
|
||||
{download,model,stack}
|
||||
```
|
||||
|
||||
## Downloading models
|
||||
|
||||
You first need to have models downloaded locally.
|
||||
|
||||
To download any model you need the **Model Descriptor**.
|
||||
This can be obtained by running the command
|
||||
```
|
||||
llama model list
|
||||
```
|
||||
|
||||
You should see a table like this:
|
||||
|
||||
```
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Model Descriptor | Hugging Face Repo | Context Length |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B | meta-llama/Llama-3.1-70B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp8 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B | meta-llama/Llama-3.1-405B-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B:bf16-mp16 | meta-llama/Llama-3.1-405B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-8B-Instruct | meta-llama/Llama-3.1-8B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-70B-Instruct | meta-llama/Llama-3.1-70B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp8 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct | meta-llama/Llama-3.1-405B-Instruct-FP8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.1-405B-Instruct:bf16-mp16 | meta-llama/Llama-3.1-405B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B | meta-llama/Llama-3.2-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B | meta-llama/Llama-3.2-3B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision | meta-llama/Llama-3.2-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision | meta-llama/Llama-3.2-90B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-1B-Instruct | meta-llama/Llama-3.2-1B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-3B-Instruct | meta-llama/Llama-3.2-3B-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-11B-Vision-Instruct | meta-llama/Llama-3.2-11B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama3.2-90B-Vision-Instruct | meta-llama/Llama-3.2-90B-Vision-Instruct | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-11B-Vision | meta-llama/Llama-Guard-3-11B-Vision | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B:int4-mp1 | meta-llama/Llama-Guard-3-1B-INT4 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-1B | meta-llama/Llama-Guard-3-1B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B | meta-llama/Llama-Guard-3-8B | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-3-8B:int8-mp1 | meta-llama/Llama-Guard-3-8B-INT8 | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Prompt-Guard-86M | meta-llama/Prompt-Guard-86M | 128K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
| Llama-Guard-2-8B | meta-llama/Llama-Guard-2-8B | 4K |
|
||||
+----------------------------------+------------------------------------------+----------------+
|
||||
```
|
||||
|
||||
To download models, you can use the `llama download` command.
|
||||
|
||||
#### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
|
||||
|
||||
Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)
|
||||
|
||||
Download the required checkpoints using the following commands:
|
||||
```bash
|
||||
# download the 3B-Instruct model; this can be run on a single GPU
|
||||
llama download --source meta --model-id Llama3.2-3B-Instruct --meta-url META_URL
|
||||
|
||||
# you can also get the larger 11B-Vision-Instruct model, which needs more GPU memory
|
||||
llama download --source meta --model-id Llama3.2-11B-Vision-Instruct --meta-url META_URL
|
||||
|
||||
# llama-agents have safety enabled by default. For this, you will need
|
||||
# safety models -- Llama-Guard and Prompt-Guard
|
||||
llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
|
||||
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
|
||||
```
|
||||
|
||||
#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
|
||||
|
||||
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
|
||||
|
||||
```bash
|
||||
llama download --source huggingface --model-id Llama3.1-8B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama3.1-70B-Instruct --hf-token <HF_TOKEN>
|
||||
|
||||
llama download --source huggingface --model-id Llama-Guard-3-1B --ignore-patterns *original*
|
||||
llama download --source huggingface --model-id Prompt-Guard-86M --ignore-patterns *original*
|
||||
```
|
||||
|
||||
**Important:** Set your environment variable `HF_TOKEN` or pass in `--hf-token` to the command to validate your access. You can find your token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
|
||||
|
||||
> **Tip:** Default for `llama download` is to run with `--ignore-patterns *.safetensors` since we use the `.pth` files in the `original` folder. For Llama Guard and Prompt Guard, however, we need safetensors. Hence, please run with `--ignore-patterns original` so that safetensors are downloaded and `.pth` files are ignored.
|
||||
|
||||
|
||||
## Understand the models
|
||||
The `llama model` command helps you explore the model’s interface.
|
||||
|
||||
1. `download`: Download the model from different sources. (meta, huggingface)
|
||||
2. `list`: Lists all the models available for download with hardware requirements to deploy the models.
|
||||
3. `prompt-format`: Show llama model message formats.
|
||||
4. `describe`: Describes all the properties of the model.
|
||||
|
||||
### Sample Usage
|
||||
|
||||
`llama model <subcommand> <options>`
|
||||
|
||||
```
|
||||
llama model --help
|
||||
```
|
||||
```
|
||||
usage: llama model [-h] {download,list,prompt-format,describe} ...
|
||||
|
||||
Work with llama models
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
|
||||
model_subcommands:
|
||||
{download,list,prompt-format,describe}
|
||||
```
|
||||
|
||||
You can use the describe command to know more about a model:
|
||||
```
|
||||
llama model describe -m Llama3.2-3B-Instruct
|
||||
```
|
||||
### Describe
|
||||
|
||||
```
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model | Llama3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Hugging Face ID | meta-llama/Llama-3.2-3B-Instruct |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Description | Llama 3.2 3b instruct model |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Context Length | 128K tokens |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Weights format | bf16 |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Model params.json | { |
|
||||
| | "dim": 3072, |
|
||||
| | "n_layers": 28, |
|
||||
| | "n_heads": 24, |
|
||||
| | "n_kv_heads": 8, |
|
||||
| | "vocab_size": 128256, |
|
||||
| | "ffn_dim_multiplier": 1.0, |
|
||||
| | "multiple_of": 256, |
|
||||
| | "norm_eps": 1e-05, |
|
||||
| | "rope_theta": 500000.0, |
|
||||
| | "use_scaled_rope": true |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
| Recommended sampling params | { |
|
||||
| | "strategy": "top_p", |
|
||||
| | "temperature": 1.0, |
|
||||
| | "top_p": 0.9, |
|
||||
| | "top_k": 0 |
|
||||
| | } |
|
||||
+-----------------------------+----------------------------------+
|
||||
```
|
||||
|
||||
### Prompt Format
|
||||
You can even run `llama model prompt-format` to see all of the templates and their tokens:
|
||||
|
||||
```
|
||||
llama model prompt-format -m Llama3.2-3B-Instruct
|
||||
```
|
||||

|
||||
|
||||
|
||||
|
||||
You will be shown a Markdown formatted description of the model interface and how prompts / messages are formatted for various scenarios.
|
||||
|
||||
**NOTE**: Outputs in the terminal are color-printed to show special tokens.
|
|
@ -19,7 +19,23 @@ author = "Meta"
|
|||
# -- General configuration ---------------------------------------------------
|
||||
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
|
||||
|
||||
extensions = ["myst_parser"]
|
||||
extensions = [
|
||||
"myst_parser",
|
||||
"sphinx_rtd_theme",
|
||||
"sphinx_copybutton",
|
||||
"sphinx_tabs.tabs",
|
||||
"sphinx_design",
|
||||
]
|
||||
myst_enable_extensions = ["colon_fence"]
|
||||
|
||||
html_theme = "sphinx_rtd_theme"
|
||||
|
||||
# html_theme = "sphinx_pdj_theme"
|
||||
# html_theme_path = [sphinx_pdj_theme.get_html_theme_path()]
|
||||
|
||||
# html_theme = "pytorch_sphinx_theme"
|
||||
# html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
|
||||
|
||||
|
||||
templates_path = ["_templates"]
|
||||
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
|
||||
|
@ -41,13 +57,28 @@ myst_enable_extensions = [
|
|||
"tasklist",
|
||||
]
|
||||
|
||||
# Copy button settings
|
||||
copybutton_prompt_text = "$ " # for bash prompts
|
||||
copybutton_prompt_is_regexp = True
|
||||
copybutton_remove_prompts = True
|
||||
copybutton_line_continuation_character = "\\"
|
||||
|
||||
# Source suffix
|
||||
source_suffix = {
|
||||
".rst": "restructuredtext",
|
||||
".md": "markdown",
|
||||
}
|
||||
|
||||
# -- Options for HTML output -------------------------------------------------
|
||||
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
|
||||
|
||||
html_theme = "alabaster"
|
||||
# html_theme = "alabaster"
|
||||
html_theme_options = {
|
||||
"canonical_url": "https://github.com/meta-llama/llama-stack",
|
||||
# "style_nav_header_background": "#c3c9d4",
|
||||
}
|
||||
|
||||
html_static_path = ["../_static"]
|
||||
html_logo = "../_static/llama-stack-logo.png"
|
||||
|
||||
html_style = "../_static/css/my_theme.css"
|
||||
|
|
357
docs/source/distribution_dev/building_distro.md
Normal file
|
@ -0,0 +1,357 @@
|
|||
# Developer Guide: Assemble a Llama Stack Distribution
|
||||
|
||||
> NOTE: This doc may be out-of-date.
|
||||
|
||||
This guide will walk you through the steps to get started with building a Llama Stack distribution from scratch with your choice of API providers. Please see the [Getting Started Guide](./getting_started.md) if you just want the basic steps to start a Llama Stack distribution.
|
||||
|
||||
## Step 1. Build
|
||||
In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will then build our distribution (in the form of a Conda environment or Docker image). In this step, we will specify:
|
||||
- `name`: the name for our distribution (e.g. `8b-instruct`)
|
||||
- `image_type`: our build image type (`conda | docker`)
|
||||
- `distribution_spec`: our distribution specs for specifying API providers
|
||||
- `description`: a short description of the configurations for the distribution
|
||||
- `providers`: specifies the underlying implementation for serving each API endpoint
|
||||
- `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of Docker image or Conda environment.
|
||||
|
||||
|
||||
At the end of the build command, a `<name>-build.yaml` file storing the build configurations will be generated.
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` will be generated and saved at the output file path specified at the end of the command.
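For reference, a sketch of what an `8b-instruct-build.yaml` might contain, assuming the all-`meta-reference` provider choices used in the wizard below (the authoritative contents are whatever the build command writes out):

```
name: 8b-instruct
distribution_spec:
  description: Use meta-reference implementations to serve all llama stack APIs
  providers:
    inference: meta-reference
    memory: meta-reference
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
image_type: conda
```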
|
||||
|
||||
#### Building from scratch
|
||||
- For a new user, we recommend starting with `llama stack build`, which launches an interactive wizard that prompts you for the build configurations.
|
||||
```
|
||||
llama stack build
|
||||
```
|
||||
|
||||
Running the command above will let you fill in the configuration to build your Llama Stack distribution; you will see output like the following.
|
||||
|
||||
```
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): 8b-instruct
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-llama-stack/8b-instruct-build.yaml
|
||||
```
|
||||
|
||||
**Ollama (optional)**
|
||||
|
||||
If you plan to use Ollama for inference, you'll need to install the server [via these instructions](https://ollama.com/download).
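For example, once Ollama is installed you can pull and serve a Llama model before pointing the distribution at it (a rough sketch; the exact model tag depends on what Ollama publishes):

```bash
# Pull and start a Llama model in Ollama (model tag is illustrative)
ollama pull llama3.1:8b-instruct-fp16
ollama run llama3.1:8b-instruct-fp16
```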
|
||||
|
||||
|
||||
#### Building from templates
|
||||
- To build from alternative API providers, we provide distribution templates for users to get started building a distribution backed by different providers.
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
```
|
||||
|
||||
```
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| Template Name | Providers | Description |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| bedrock | { | Use Amazon Bedrock APIs. |
|
||||
| | "inference": "remote::bedrock", | |
|
||||
| | "memory": "meta-reference", | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| databricks | { | Use Databricks for running LLM inference |
|
||||
| | "inference": "remote::databricks", | |
|
||||
| | "memory": "meta-reference", | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| fireworks | { | Use Fireworks.ai for running LLM inference |
|
||||
| | "inference": "remote::fireworks", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::weaviate", | |
|
||||
| | "remote::chromadb", | |
|
||||
| | "remote::pgvector" | |
|
||||
| | ], | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| hf-endpoint | { | Like local, but use Hugging Face Inference Endpoints for running LLM inference. |
|
||||
| | "inference": "remote::hf::endpoint", | See https://hf.co/docs/api-endpoints. |
|
||||
| | "memory": "meta-reference", | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| hf-serverless | { | Like local, but use Hugging Face Inference API (serverless) for running LLM |
|
||||
| | "inference": "remote::hf::serverless", | inference. |
|
||||
| | "memory": "meta-reference", | See https://hf.co/docs/api-inference. |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| meta-reference-gpu | { | Use code from `llama_stack` itself to serve all llama stack APIs |
|
||||
| | "inference": "meta-reference", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::chromadb", | |
|
||||
| | "remote::pgvector" | |
|
||||
| | ], | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| meta-reference-quantized-gpu | { | Use code from `llama_stack` itself to serve all llama stack APIs |
|
||||
| | "inference": "meta-reference-quantized", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::chromadb", | |
|
||||
| | "remote::pgvector" | |
|
||||
| | ], | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| ollama | { | Use ollama for running LLM inference |
|
||||
| | "inference": "remote::ollama", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::chromadb", | |
|
||||
| | "remote::pgvector" | |
|
||||
| | ], | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| tgi | { | Use TGI for running LLM inference |
|
||||
| | "inference": "remote::tgi", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::chromadb", | |
|
||||
| | "remote::pgvector" | |
|
||||
| | ], | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| together | { | Use Together.ai for running LLM inference |
|
||||
| | "inference": "remote::together", | |
|
||||
| | "memory": [ | |
|
||||
| | "meta-reference", | |
|
||||
| | "remote::weaviate" | |
|
||||
| | ], | |
|
||||
| | "safety": "remote::together", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
| vllm | { | Like local, but use vLLM for running LLM inference |
|
||||
| | "inference": "vllm", | |
|
||||
| | "memory": "meta-reference", | |
|
||||
| | "safety": "meta-reference", | |
|
||||
| | "agents": "meta-reference", | |
|
||||
| | "telemetry": "meta-reference" | |
|
||||
| | } | |
|
||||
+------------------------------+--------------------------------------------+----------------------------------------------------------------------------------+
|
||||
```
|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
|
||||
```
|
||||
llama stack build --template tgi
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --template tgi
|
||||
...
|
||||
...
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-tgi/tgi-build.yaml
|
||||
You may now run `llama stack configure tgi` or `llama stack configure ~/.conda/envs/llamastack-tgi/tgi-build.yaml`
|
||||
```
|
||||
|
||||
#### Building from config file
|
||||
- In addition to templates, you may customize the build to your liking by editing config files and building from a config file with the following command.
|
||||
|
||||
- The config file will have contents similar to the ones in `llama_stack/templates/`.
|
||||
|
||||
```
|
||||
$ cat llama_stack/templates/ollama/build.yaml
|
||||
|
||||
name: ollama
|
||||
distribution_spec:
|
||||
description: Like local, but use ollama for running LLM inference
|
||||
providers:
|
||||
inference: remote::ollama
|
||||
memory: meta-reference
|
||||
safety: meta-reference
|
||||
agents: meta-reference
|
||||
telemetry: meta-reference
|
||||
image_type: conda
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/templates/ollama/build.yaml
|
||||
```
|
||||
|
||||
#### How to build distribution with Docker image
|
||||
|
||||
> [!TIP]
|
||||
> Podman is supported as an alternative to Docker. Set `DOCKER_BINARY` to `podman` in your environment to use Podman.
|
||||
|
||||
To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --template local --image-type docker
|
||||
```
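If you prefer Podman (see the tip above), a minimal sketch of the same build, assuming `podman` is installed and on your `PATH`:

```bash
# Tell the build to use podman instead of docker
export DOCKER_BINARY=podman
llama stack build --template local --image-type docker
```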
|
||||
|
||||
Alternatively, you may use a config file, set `image_type` to `docker` in your `<name>-build.yaml` file, and run `llama stack build --config <name>-build.yaml`. The `<name>-build.yaml` will have contents like:
|
||||
|
||||
```
|
||||
name: local-docker-example
|
||||
distribution_spec:
|
||||
description: Use code from `llama_stack` itself to serve all llama stack APIs
|
||||
docker_image: null
|
||||
providers:
|
||||
inference: meta-reference
|
||||
memory: meta-reference-faiss
|
||||
safety: meta-reference
|
||||
agentic_system: meta-reference
|
||||
telemetry: console
|
||||
image_type: docker
|
||||
```
|
||||
|
||||
The following command allows you to build a Docker image with the name `<name>`
|
||||
```
|
||||
llama stack build --config <name>-build.yaml
|
||||
|
||||
Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/DockerfileFROM python:3.10-slim
|
||||
WORKDIR /app
|
||||
...
|
||||
...
|
||||
You can run it with: podman run -p 8000:8000 llamastack-docker-local
|
||||
Build spec configuration saved at ~/.llama/distributions/docker/docker-local-build.yaml
|
||||
```
|
||||
|
||||
|
||||
## Step 2. Configure
|
||||
After our distribution is built (either as a Docker image or a Conda environment), we will run the following command to configure it:
|
||||
```
|
||||
llama stack configure [ <docker-image-name> | <path/to/name.build.yaml>]
|
||||
```
|
||||
- For `conda` environments: `<path/to/name.build.yaml>` would be the generated build spec saved from Step 1.
|
||||
- For `docker` images downloaded from Dockerhub, you could also use `<docker-image-name>` as the argument.
|
||||
- Run `docker images` to check list of available images on your machine.
|
||||
|
||||
```
|
||||
$ llama stack configure tgi
|
||||
|
||||
Configuring API: inference (meta-reference)
|
||||
Enter value for model (existing: Meta-Llama3.1-8B-Instruct) (required):
|
||||
Enter value for quantization (optional):
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (existing: 4096) (required):
|
||||
Enter value for max_batch_size (existing: 1) (required):
|
||||
|
||||
Configuring API: memory (meta-reference-faiss)
|
||||
|
||||
Configuring API: safety (meta-reference)
|
||||
Do you want to configure llama_guard_shield? (y/n): y
|
||||
Entering sub-configuration for llama_guard_shield:
|
||||
Enter value for model (default: Llama-Guard-3-1B) (required):
|
||||
Enter value for excluded_categories (default: []) (required):
|
||||
Enter value for disable_input_check (default: False) (required):
|
||||
Enter value for disable_output_check (default: False) (required):
|
||||
Do you want to configure prompt_guard_shield? (y/n): y
|
||||
Entering sub-configuration for prompt_guard_shield:
|
||||
Enter value for model (default: Prompt-Guard-86M) (required):
|
||||
|
||||
Configuring API: agentic_system (meta-reference)
|
||||
Enter value for brave_search_api_key (optional):
|
||||
Enter value for bing_search_api_key (optional):
|
||||
Enter value for wolfram_api_key (optional):
|
||||
|
||||
Configuring API: telemetry (console)
|
||||
|
||||
YAML configuration has been written to ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find a run configuration spec in `~/.llama/builds/conda/tgi-run.yaml`. You may edit this file to change the settings.
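As a rough illustration only (the authoritative contents are whatever `llama stack configure` wrote out; the layout below is a sketch modeled on the provider config format used elsewhere in these docs), the file will carry the values entered above:

```
inference:
  - provider_id: meta-reference
    provider_type: meta-reference
    config:
      model: Meta-Llama3.1-8B-Instruct
      max_seq_len: 4096
      max_batch_size: 1
safety:
  - provider_id: meta-reference
    provider_type: meta-reference
    config:
      llama_guard_shield:
        model: Llama-Guard-3-1B
      prompt_guard_shield:
        model: Prompt-Guard-86M
```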
|
||||
|
||||
As you can see, we did basic configuration above and configured:
|
||||
- inference to run on model `Meta-Llama3.1-8B-Instruct` (obtained from `llama model list`)
|
||||
- Llama Guard safety shield with model `Llama-Guard-3-1B`
|
||||
- Prompt Guard safety shield with model `Prompt-Guard-86M`
|
||||
|
||||
To see how these configurations are stored as YAML, check out the file printed at the end of the configuration step.
|
||||
|
||||
Note that all configurations as well as models are stored in `~/.llama`
|
||||
|
||||
|
||||
## Step 3. Run
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end by the `llama stack configure` step.
|
||||
|
||||
```
|
||||
llama stack run 8b-instruct
|
||||
```
|
||||
|
||||
You should see the Llama Stack server start and print the APIs that it supports:
|
||||
|
||||
```
|
||||
$ llama stack run 8b-instruct
|
||||
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
Loaded in 19.28 seconds
|
||||
NCCL version 2.20.5+cuda12.4
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/batch_chat_completion
|
||||
Serving POST /inference/batch_completion
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/memory_bank/attach
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/memory_bank/detach
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Listening on :::5000
|
||||
INFO: Started server process [453333]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Configuration is in `~/.llama/builds/local/conda/tgi-run.yaml`. Feel free to increase `max_seq_len`.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The "local" distribution inference server currently only supports CUDA. It will not work on Apple Silicon machines.
|
||||
|
||||
> [!TIP]
|
||||
> You might need to use the flag `--disable-ipv6` to disable IPv6 support.
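For example, with the build name used in this guide:

```bash
# Fall back to IPv4 if your environment cannot bind to :: addresses
llama stack run 8b-instruct --disable-ipv6
```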
|
||||
|
||||
This server is running a Llama model locally.
|
20
docs/source/distribution_dev/index.md
Normal file
|
@ -0,0 +1,20 @@
|
|||
# Developer Guide
|
||||
|
||||
```{toctree}
|
||||
:hidden:
|
||||
:maxdepth: 1
|
||||
|
||||
building_distro
|
||||
```
|
||||
|
||||
## Key Concepts
|
||||
|
||||
### API Provider
|
||||
A Provider is what makes the API real -- they provide the actual implementation backing the API.
|
||||
|
||||
As an example, for Inference, we could have the implementation be backed by open source libraries like `[ torch | vLLM | TensorRT ]` as possible options.
|
||||
|
||||
A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
|
||||
|
||||
### Distribution
|
||||
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally, but can choose a cloud provider for a large model. Regardless, the higher level APIs your app needs to work with don't need to change at all. You can even imagine moving across the server / mobile-device boundary as well always using the same uniform set of APIs for developing Generative AI applications.
|
|
@ -1,429 +0,0 @@
|
|||
# Getting Started
|
||||
|
||||
This guide will walk you through the steps to get started with the end-to-end flow for Llama Stack. It mainly focuses on building a Llama Stack distribution and starting up a Llama Stack server. Please see our [documentation](https://github.com/meta-llama/llama-stack/README.md) for what you can do with Llama Stack, and [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main) for example apps built with Llama Stack.
|
||||
|
||||
## Installation
|
||||
The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
|
||||
|
||||
You can install this repository as a [package](https://pypi.org/project/llama-stack/) with `pip install llama-stack`
|
||||
|
||||
If you want to install from source:
|
||||
|
||||
```bash
|
||||
mkdir -p ~/local
|
||||
cd ~/local
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
|
||||
conda create -n stack python=3.10
|
||||
conda activate stack
|
||||
|
||||
cd llama-stack
|
||||
$CONDA_PREFIX/bin/pip install -e .
|
||||
```
|
||||
|
||||
For what you can do with the Llama CLI, please refer to [CLI Reference](./cli_reference.md).
|
||||
|
||||
## Quick Starting Llama Stack Server
|
||||
|
||||
### Starting up server via docker
|
||||
|
||||
We provide two pre-built Docker images of the Llama Stack distribution, which can be found at the following links.
|
||||
- [llamastack-local-gpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-gpu/general)
|
||||
- This is a packaged version with our local meta-reference implementations, where you will be running inference locally with downloaded Llama model checkpoints.
|
||||
- [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general)
|
||||
- This is a lite version with remote inference where you can hook up to your favourite remote inference framework (e.g. ollama, fireworks, together, tgi) for running inference without GPU.
|
||||
|
||||
> [!NOTE]
|
||||
> For GPU inference, you need to set the environment variable below to specify the local directory containing your model checkpoints, and enable GPU inference when starting the Docker container.
|
||||
```
|
||||
export LLAMA_CHECKPOINT_DIR=~/.llama
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> `~/.llama` should be the path containing downloaded weights of Llama models.
|
||||
|
||||
|
||||
To download and start running a pre-built docker container, you may use the following commands:
|
||||
|
||||
```
|
||||
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama --gpus=all llamastack/llamastack-local-gpu
|
||||
```
|
||||
|
||||
> [!TIP]
|
||||
> Pro Tip: You can use `docker compose up` to start a distribution with remote providers (e.g. TGI) using [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general). You can check out [these scripts](https://github.com/meta-llama/llama-stack/llama_stack/distribution/docker/README.md) to help you get started.
|
||||
|
||||
### Build->Configure->Run Llama Stack server via conda
|
||||
You may also build a LlamaStack distribution from scratch, configure it, and start running the distribution. This is useful for developing on LlamaStack.
|
||||
|
||||
**`llama stack build`**
|
||||
- You'll be prompted to enter build information interactively.
|
||||
```
|
||||
llama stack build
|
||||
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): my-local-stack
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-stack/my-local-stack-build.yaml
|
||||
You can now run `llama stack configure my-local-stack`
|
||||
```
|
||||
|
||||
**`llama stack configure`**
|
||||
- Run `llama stack configure <name>` with the name you have previously defined in `build` step.
|
||||
```
|
||||
llama stack configure <name>
|
||||
```
|
||||
- You will be prompted to enter configurations for your Llama Stack
|
||||
|
||||
```
|
||||
$ llama stack configure my-local-stack
|
||||
|
||||
Configuring API `inference`...
|
||||
=== Configuring provider `meta-reference` for API inference...
|
||||
Enter value for model (default: Llama3.1-8B-Instruct) (required):
|
||||
Do you want to configure quantization? (y/n): n
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (default: 4096) (required):
|
||||
Enter value for max_batch_size (default: 1) (required):
|
||||
|
||||
Configuring API `safety`...
|
||||
=== Configuring provider `meta-reference` for API safety...
|
||||
Do you want to configure llama_guard_shield? (y/n): n
|
||||
Do you want to configure prompt_guard_shield? (y/n): n
|
||||
|
||||
Configuring API `agents`...
|
||||
=== Configuring provider `meta-reference` for API agents...
|
||||
Enter `type` for persistence_store (options: redis, sqlite, postgres) (default: sqlite):
|
||||
|
||||
Configuring SqliteKVStoreConfig:
|
||||
Enter value for namespace (optional):
|
||||
Enter value for db_path (default: /home/xiyan/.llama/runtime/kvstore.db) (required):
|
||||
|
||||
Configuring API `memory`...
|
||||
=== Configuring provider `meta-reference` for API memory...
|
||||
> Please enter the supported memory bank type your provider has for memory: vector
|
||||
|
||||
Configuring API `telemetry`...
|
||||
=== Configuring provider `meta-reference` for API telemetry...
|
||||
|
||||
> YAML configuration has been written to ~/.llama/builds/conda/my-local-stack-run.yaml.
|
||||
You can now run `llama stack run my-local-stack --port PORT`
|
||||
```
|
||||
|
||||
**`llama stack run`**
|
||||
- Run `llama stack run <name>` with the name you have previously defined.
|
||||
```
|
||||
llama stack run my-local-stack
|
||||
|
||||
...
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
...
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /inference/embeddings
|
||||
Serving POST /memory_banks/create
|
||||
Serving DELETE /memory_bank/documents/delete
|
||||
Serving DELETE /memory_banks/drop
|
||||
Serving GET /memory_bank/documents/get
|
||||
Serving GET /memory_banks/get
|
||||
Serving POST /memory_bank/insert
|
||||
Serving GET /memory_banks/list
|
||||
Serving POST /memory_bank/query
|
||||
Serving POST /memory_bank/update
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Serving GET /telemetry/get_trace
|
||||
Serving POST /telemetry/log_event
|
||||
Listening on :::5000
|
||||
INFO: Started server process [587053]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
### End-to-end flow of building, configuring, running, and testing a Distribution
|
||||
|
||||
#### Step 1. Build
|
||||
In the following steps, imagine we'll be working with a `Meta-Llama3.1-8B-Instruct` model. We will name our build `8b-instruct` to help us remember the config. We will then build our distribution (in the form of a Conda environment or Docker image). In this step, we will specify:
|
||||
- `name`: the name for our distribution (e.g. `8b-instruct`)
|
||||
- `image_type`: our build image type (`conda | docker`)
|
||||
- `distribution_spec`: our distribution specs for specifying API providers
|
||||
- `description`: a short description of the configurations for the distribution
|
||||
- `providers`: specifies the underlying implementation for serving each API endpoint
|
||||
- `image_type`: `conda` | `docker` to specify whether to build the distribution in the form of Docker image or Conda environment.
|
||||
|
||||
|
||||
At the end of the build command, a `<name>-build.yaml` file storing the build configurations will be generated.
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
|
||||
#### Building from scratch
|
||||
- For a new user, we recommend starting with `llama stack build`, which launches an interactive wizard that prompts you for the build configurations.
|
||||
```
|
||||
llama stack build
|
||||
```
|
||||
|
||||
Running the command above will let you fill in the configuration to build your Llama Stack distribution; you will see output like the following.
|
||||
|
||||
```
|
||||
> Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): 8b-instruct
|
||||
> Enter the image type you want your distribution to be built with (docker or conda): conda
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
|
||||
> Enter the API provider for the inference API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the safety API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the agents API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the memory API: (default=meta-reference): meta-reference
|
||||
> Enter the API provider for the telemetry API: (default=meta-reference): meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack distribution:
|
||||
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-my-local-llama-stack/8b-instruct-build.yaml
|
||||
```
|
||||
|
||||
**Ollama (optional)**
|
||||
|
||||
If you plan to use Ollama for inference, you'll need to install the server [via these instructions](https://ollama.com/download).
|
||||
|
||||
|
||||
#### Building from templates
|
||||
- To build from alternative API providers, we provide distribution templates for users to get started building a distribution backed by different providers.
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
```
|
||||
|
||||

|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
|
||||
```
|
||||
llama stack build --template tgi
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --template tgi
|
||||
...
|
||||
...
|
||||
Build spec configuration saved at ~/.conda/envs/llamastack-tgi/tgi-build.yaml
|
||||
You may now run `llama stack configure tgi` or `llama stack configure ~/.conda/envs/llamastack-tgi/tgi-build.yaml`
|
||||
```
|
||||
|
||||
#### Building from config file
|
||||
- In addition to templates, you may customize the build to your liking by editing config files and building from a config file with the following command.
|
||||
|
||||
- The config file will have contents similar to the ones in `llama_stack/templates/`.
|
||||
|
||||
```
|
||||
$ cat llama_stack/templates/ollama/build.yaml
|
||||
|
||||
name: ollama
|
||||
distribution_spec:
|
||||
description: Like local, but use ollama for running LLM inference
|
||||
providers:
|
||||
inference: remote::ollama
|
||||
memory: meta-reference
|
||||
safety: meta-reference
|
||||
agents: meta-reference
|
||||
telemetry: meta-reference
|
||||
image_type: conda
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/templates/ollama/build.yaml
|
||||
```
|
||||
|
||||
#### How to build distribution with Docker image
|
||||
|
||||
> [!TIP]
|
||||
> Podman is supported as an alternative to Docker. Set `DOCKER_BINARY` to `podman` in your environment to use Podman.
|
||||
|
||||
To build a docker image, you may start off from a template and use the `--image-type docker` flag to specify `docker` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --template tgi --image-type docker
|
||||
```
|
||||
|
||||
Alternatively, you may use a config file, set `image_type` to `docker` in your `<name>-build.yaml` file, and run `llama stack build --config <name>-build.yaml`. The `<name>-build.yaml` will have contents like:
|
||||
|
||||
```
|
||||
name: local-docker-example
|
||||
distribution_spec:
|
||||
description: Use code from `llama_stack` itself to serve all llama stack APIs
|
||||
docker_image: null
|
||||
providers:
|
||||
inference: meta-reference
|
||||
memory: meta-reference-faiss
|
||||
safety: meta-reference
|
||||
agentic_system: meta-reference
|
||||
telemetry: console
|
||||
image_type: docker
|
||||
```
|
||||
|
||||
The following command allows you to build a Docker image with the name `<name>`
|
||||
```
|
||||
llama stack build --config <name>-build.yaml
|
||||
|
||||
Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/DockerfileFROM python:3.10-slim
|
||||
WORKDIR /app
|
||||
...
|
||||
...
|
||||
You can run it with: podman run -p 8000:8000 llamastack-docker-local
|
||||
Build spec configuration saved at ~/.llama/distributions/docker/docker-local-build.yaml
|
||||
```
|
||||
|
||||
|
||||
### Step 2. Configure
|
||||
After our distribution is built (either as a Docker image or a Conda environment), we will run the following command to configure it:
|
||||
```
|
||||
llama stack configure [ <docker-image-name> | <path/to/name.build.yaml>]
|
||||
```
|
||||
- For `conda` environments: <path/to/name.build.yaml> would be the generated build spec saved from Step 1.
|
||||
- For `docker` images downloaded from Dockerhub, you could also use <docker-image-name> as the argument.
|
||||
- Run `docker images` to check list of available images on your machine.
|
||||
|
||||
```
|
||||
$ llama stack configure tgi
|
||||
|
||||
Configuring API: inference (meta-reference)
|
||||
Enter value for model (existing: Meta-Llama3.1-8B-Instruct) (required):
|
||||
Enter value for quantization (optional):
|
||||
Enter value for torch_seed (optional):
|
||||
Enter value for max_seq_len (existing: 4096) (required):
|
||||
Enter value for max_batch_size (existing: 1) (required):
|
||||
|
||||
Configuring API: memory (meta-reference-faiss)
|
||||
|
||||
Configuring API: safety (meta-reference)
|
||||
Do you want to configure llama_guard_shield? (y/n): y
|
||||
Entering sub-configuration for llama_guard_shield:
|
||||
Enter value for model (default: Llama-Guard-3-1B) (required):
|
||||
Enter value for excluded_categories (default: []) (required):
|
||||
Enter value for disable_input_check (default: False) (required):
|
||||
Enter value for disable_output_check (default: False) (required):
|
||||
Do you want to configure prompt_guard_shield? (y/n): y
|
||||
Entering sub-configuration for prompt_guard_shield:
|
||||
Enter value for model (default: Prompt-Guard-86M) (required):
|
||||
|
||||
Configuring API: agentic_system (meta-reference)
|
||||
Enter value for brave_search_api_key (optional):
|
||||
Enter value for bing_search_api_key (optional):
|
||||
Enter value for wolfram_api_key (optional):
|
||||
|
||||
Configuring API: telemetry (console)
|
||||
|
||||
YAML configuration has been written to ~/.llama/builds/conda/tgi-run.yaml
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find a run configuration spec in `~/.llama/builds/conda/tgi-run.yaml`. You may edit this file to change the settings.
|
||||
|
||||
As you can see, we did basic configuration above and configured:
|
||||
- inference to run on model `Meta-Llama3.1-8B-Instruct` (obtained from `llama model list`)
|
||||
- Llama Guard safety shield with model `Llama-Guard-3-1B`
|
||||
- Prompt Guard safety shield with model `Prompt-Guard-86M`
|
||||
|
||||
To see how these configurations are stored as YAML, check out the file printed at the end of the configuration step.
|
||||
|
||||
Note that all configurations as well as models are stored in `~/.llama`
|
||||
|
||||
|
||||
### Step 3. Run
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end by the `llama stack configure` step.
|
||||
|
||||
```
|
||||
llama stack run tgi
|
||||
```
|
||||
|
||||
You should see the Llama Stack server start and print the APIs that it supports:
|
||||
|
||||
```
|
||||
$ llama stack run tgi
|
||||
|
||||
> initializing model parallel with size 1
|
||||
> initializing ddp with size 1
|
||||
> initializing pipeline with size 1
|
||||
Loaded in 19.28 seconds
|
||||
NCCL version 2.20.5+cuda12.4
|
||||
Finished model load YES READY
|
||||
Serving POST /inference/batch_chat_completion
|
||||
Serving POST /inference/batch_completion
|
||||
Serving POST /inference/chat_completion
|
||||
Serving POST /inference/completion
|
||||
Serving POST /safety/run_shield
|
||||
Serving POST /agentic_system/memory_bank/attach
|
||||
Serving POST /agentic_system/create
|
||||
Serving POST /agentic_system/session/create
|
||||
Serving POST /agentic_system/turn/create
|
||||
Serving POST /agentic_system/delete
|
||||
Serving POST /agentic_system/session/delete
|
||||
Serving POST /agentic_system/memory_bank/detach
|
||||
Serving POST /agentic_system/session/get
|
||||
Serving POST /agentic_system/step/get
|
||||
Serving POST /agentic_system/turn/get
|
||||
Listening on :::5000
|
||||
INFO: Started server process [453333]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> Configuration is in `~/.llama/builds/local/conda/8b-instruct-run.yaml`. Feel free to increase `max_seq_len`.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The "local" distribution inference server currently only supports CUDA. It will not work on Apple Silicon machines.
|
||||
|
||||
> [!TIP]
|
||||
> You might need to use the flag `--disable-ipv6` to disable IPv6 support.
|
||||
|
||||
This server is running a Llama model locally.
|
||||
|
||||
### Step 4. Test with Client
|
||||
Once the server is set up, we can test it with a client to see example outputs.
|
||||
```
|
||||
cd /path/to/llama-stack
|
||||
conda activate <env> # any environment containing the llama-stack pip package will work
|
||||
|
||||
python -m llama_stack.apis.inference.client localhost 5000
|
||||
```
|
||||
|
||||
This will run the chat completion client and query the distribution’s /inference/chat_completion API.
|
||||
|
||||
Here is an example output:
|
||||
```
|
||||
User>hello world, write me a 2 sentence poem about the moon
|
||||
Assistant> Here's a 2-sentence poem about the moon:
|
||||
|
||||
The moon glows softly in the midnight sky,
|
||||
A beacon of wonder, as it passes by.
|
||||
```
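If you prefer to exercise the HTTP endpoint directly, here is a rough curl sketch against the same server (the JSON field names are illustrative; the authoritative request schema is defined by the Inference API):

```bash
curl -s http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "hello world, write me a 2 sentence poem about the moon"}],
    "stream": false
  }'
```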
|
||||
|
||||
Similarly you can test safety (if you configured llama-guard and/or prompt-guard shields) by:
|
||||
|
||||
```
|
||||
python -m llama_stack.apis.safety.client localhost 5000
|
||||
```
|
||||
|
||||
|
||||
Check out our client SDKs for connecting to a Llama Stack server in your preferred language; you can choose from the [python](https://github.com/meta-llama/llama-stack-client-python), [node](https://github.com/meta-llama/llama-stack-client-node), [swift](https://github.com/meta-llama/llama-stack-client-swift), and [kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) SDKs to quickly build your applications.
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
|
|
@ -13,20 +13,20 @@ Based on your developer needs, below are references to guides to help you get st
|
|||
* Developer Need: I want to start a local Llama Stack server with my GPU using meta-reference implementations.
|
||||
* Effort: 5min
|
||||
* Guide:
|
||||
- Please see our [Getting Started Guide](./getting_started.md) on starting up a meta-reference Llama Stack server.
|
||||
- Please see our [meta-reference-gpu](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/meta-reference-gpu.html) on starting up a meta-reference Llama Stack server.
|
||||
|
||||
### Llama Stack Server with Remote Providers
|
||||
* Developer need: I want a Llama Stack distribution with a remote provider.
|
||||
* Effort: 10min
|
||||
* Guide
|
||||
- Please see our [Distributions Guide](../distributions/) on starting up distributions with remote providers.
|
||||
- Please see our [Distributions Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/index.html) on starting up distributions with remote providers.
|
||||
|
||||
|
||||
### On-Device (iOS) Llama Stack
|
||||
* Developer Need: I want to use Llama Stack on-Device
|
||||
* Effort: 1.5hr
|
||||
* Guide:
|
||||
- Please see our [iOS Llama Stack SDK](../llama_stack/providers/impls/ios/inference) implementations
|
||||
- Please see our [iOS Llama Stack SDK](./ios_sdk.md) implementations
|
||||
|
||||
### Assemble your own Llama Stack Distribution
|
||||
* Developer Need: I want to assemble my own distribution with API providers to my liking
|
||||
|
@ -38,4 +38,4 @@ Based on your developer needs, below are references to guides to help you get st
|
|||
* Developer Need: I want to add a new API provider to Llama Stack.
|
||||
* Effort: 3hr
|
||||
* Guide
|
||||
- Please see our [Adding a New API Provider](./new_api_provider.md) guide for adding a new API provider.
|
||||
- Please see our [Adding a New API Provider](https://llama-stack.readthedocs.io/en/latest/api_providers/new_api_provider.html) guide for adding a new API provider.
|
|
@ -0,0 +1,9 @@
|
|||
# On-Device Distribution
|
||||
|
||||
On-device distributions are Llama Stack distributions that run locally on your iOS / Android device.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
ios_sdk
|
||||
```
|
|
@ -1,10 +1,66 @@
|
|||
# LocalInference
|
||||
# iOS SDK
|
||||
|
||||
We offer both remote and on-device use of Llama Stack in Swift via two components:
|
||||
|
||||
1. [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift/)
|
||||
2. [LocalInferenceImpl](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/impls/ios/inference)
|
||||
|
||||
```{image} ../../../../_static/remote_or_local.gif
|
||||
:alt: Seamlessly switching between local, on-device inference and remote hosted inference
|
||||
:width: 412px
|
||||
:align: center
|
||||
```
|
||||
|
||||
## Remote Only
|
||||
|
||||
If you don't want to run inference on-device, then you can connect to any hosted Llama Stack distribution with #1.
|
||||
|
||||
1. Add `https://github.com/meta-llama/llama-stack-client-swift/` as a Package Dependency in Xcode
|
||||
|
||||
2. Add `LlamaStackClient` as a framework to your app target
|
||||
|
||||
3. Call an API:
|
||||
|
||||
```swift
|
||||
import LlamaStackClient
|
||||
|
||||
let agents = RemoteAgents(url: URL(string: "http://localhost:5000")!)
|
||||
let request = Components.Schemas.CreateAgentTurnRequest(
|
||||
agent_id: agentId,
|
||||
messages: [
|
||||
.UserMessage(Components.Schemas.UserMessage(
|
||||
content: .case1("Hello Llama!"),
|
||||
role: .user
|
||||
))
|
||||
],
|
||||
session_id: self.agenticSystemSessionId,
|
||||
stream: true
|
||||
)
|
||||
|
||||
for try await chunk in try await agents.createTurn(request: request) {
|
||||
let payload = chunk.event.payload
|
||||
// ...
|
||||
```
|
||||
|
||||
Check out [iOSCalendarAssistant](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/ios_calendar_assistant) for a complete app demo.
|
||||
|
||||
## LocalInference
|
||||
|
||||
LocalInference provides a local inference implementation powered by [executorch](https://github.com/pytorch/executorch/).
|
||||
|
||||
Llama Stack currently supports on-device inference for iOS with Android coming soon. You can run on-device inference on Android today using [executorch](https://github.com/pytorch/executorch/tree/main/examples/demo-apps/android/LlamaDemo), PyTorch’s on-device inference library.
|
||||
|
||||
## Installation
|
||||
The APIs *work the same as remote* – the only difference is you'll instead use the `LocalAgents` / `LocalInference` classes and pass in a `DispatchQueue`:
|
||||
|
||||
```swift
|
||||
private let runnerQueue = DispatchQueue(label: "org.llamastack.stacksummary")
|
||||
let inference = LocalInference(queue: runnerQueue)
|
||||
let agents = LocalAgents(inference: self.inference)
|
||||
```
|
||||
|
||||
Check out [iOSCalendarAssistantWithLocalInf](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/ios_calendar_assistant) for a complete app demo.
|
||||
|
||||
### Installation
|
||||
|
||||
We're working on making LocalInference easier to set up. For now, you'll need to import it via `.xcframework`:
|
||||
|
||||
|
@ -54,7 +110,7 @@ We're working on making LocalInference easier to set up. For now, you'll need t
|
|||
$(BUILT_PRODUCTS_DIR)/libbackend_mps-simulator-release.a
|
||||
```
|
||||
|
||||
## Preparing a model
|
||||
### Preparing a model
|
||||
|
||||
1. Prepare a `.pte` file [following the executorch docs](https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md#step-2-prepare-model)
|
||||
2. Bundle the `.pte` and `tokenizer.model` file into your app
|
||||
|
@ -70,7 +126,7 @@ We now support models quantized using SpinQuant and QAT-LoRA which offer a signi
|
|||
| SpinQuant | 10.1 | 5.2 | 0.2 | 0.2 |
|
||||
|
||||
|
||||
## Using LocalInference
|
||||
### Using LocalInference
|
||||
|
||||
1. Instantiate LocalInference with a DispatchQueue. Optionally, pass it into your agents service:
|
||||
|
||||
|
@ -105,7 +161,7 @@ for await chunk in try await agentsService.initAndCreateTurn(
|
|||
) {
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
### Troubleshooting
|
||||
|
||||
If you receive errors like "missing package product" or "invalid checksum", try cleaning the build folder and resetting the Swift package cache:
|
||||
|
|
@ -1,39 +1,23 @@
|
|||
# Fireworks Distribution
|
||||
|
||||
The `llamastack/distribution-` distribution consists of the following provider configurations.
|
||||
The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.
|
||||
|
||||
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
|
||||
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |
|
||||
|
||||
### Step 0. Prerequisite
|
||||
- Make sure you have access to a Fireworks API key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).
|
||||
|
||||
### Start the Distribution (Single Node CPU)
|
||||
### Step 1. Start the Distribution (Single Node CPU)
|
||||
|
||||
#### (Option 1) Start Distribution Via Docker
|
||||
> [!NOTE]
|
||||
> This assumes you have a hosted endpoint at Fireworks with an API key.
|
||||
|
||||
```
|
||||
$ cd distributions/fireworks
|
||||
$ ls
|
||||
compose.yaml run.yaml
|
||||
$ docker compose up
|
||||
```
|
||||
|
||||
Make sure that in your `run.yaml` file, your inference provider is pointing to the correct Fireworks URL server endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: fireworks
|
||||
provider_type: remote::fireworks
|
||||
config:
|
||||
url: https://api.fireworks.ai/inference
|
||||
api_key: <optional api key>
|
||||
```
|
||||
|
||||
### (Alternative) llama stack run (Single Node CPU)
|
||||
|
||||
```
|
||||
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-fireworks --yaml_config /root/my-run.yaml
|
||||
$ cd distributions/fireworks && docker compose up
|
||||
```
|
||||
|
||||
Make sure that in your `run.yaml` file, your inference provider is pointing to the correct Fireworks URL server endpoint. E.g.
|
||||
|
@ -43,10 +27,10 @@ inference:
|
|||
provider_type: remote::fireworks
|
||||
config:
|
||||
url: https://api.fireworks.ai/inference
|
||||
api_key: <enter your api key>
|
||||
api_key: <optional api key>
|
||||
```
|
||||
|
||||
**Via Conda**
|
||||
#### (Option 2) Start Distribution Via Conda
|
||||
|
||||
```bash
|
||||
llama stack build --template fireworks --image-type conda
|
||||
|
@ -54,9 +38,10 @@ llama stack build --template fireworks --image-type conda
|
|||
llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
### Model Serving
|
||||
|
||||
Use `llama-stack-client models list` to chekc the available models served by Fireworks.
|
||||
### (Optional) Model Serving
|
||||
|
||||
Use `llama-stack-client models list` to check the available models served by Fireworks.
|
||||
```
|
||||
$ llama-stack-client models list
|
||||
+------------------------------+------------------------------+---------------+------------+
|
|
@ -0,0 +1,15 @@
|
|||
# Remote-Hosted Distribution
|
||||
|
||||
Remote-hosted distributions are Llama Stack distributions that connect to remote hosted services through the Llama Stack server. Inference is done through remote providers. These are useful if you have an API key for a remote inference provider like Fireworks, Together, etc.
|
||||
|
||||
| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
|
||||
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
fireworks
|
||||
together
|
||||
```
|
|
@ -0,0 +1,62 @@
|
|||
# Together Distribution
|
||||
|
||||
### Connect to a Llama Stack Together Endpoint
|
||||
- You may connect to a hosted endpoint `https://llama-stack.together.ai`, serving a Llama Stack distribution
|
||||
|
||||
The `llamastack/distribution-together` distribution consists of the following provider configurations.
|
||||
|
||||
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
|
||||
| **Provider(s)** | remote::together | meta-reference | meta-reference, remote::weaviate | meta-reference | meta-reference |
|
||||
|
||||
|
||||
### Docker: Start the Distribution (Single Node CPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have a hosted endpoint at Together with an API key.
|
||||
|
||||
```
|
||||
$ cd distributions/together && docker compose up
|
||||
```
|
||||
|
||||
Make sure in your `run.yaml` file, your inference provider is pointing to the correct Together URL server endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: together
|
||||
provider_type: remote::together
|
||||
config:
|
||||
url: https://api.together.xyz/v1
|
||||
api_key: <optional api key>
|
||||
```
|
||||
|
||||
### Conda llama stack run (Single Node CPU)
|
||||
|
||||
```bash
|
||||
llama stack build --template together --image-type conda
|
||||
# -- modify run.yaml to a valid Together server endpoint
|
||||
llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
### (Optional) Update Model Serving Configuration
|
||||
|
||||
Use `llama-stack-client models list` to check the available models served by together.
|
||||
|
||||
```
|
||||
$ llama-stack-client models list
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| identifier | llama_model | provider_id | metadata |
|
||||
+==============================+==============================+===============+============+
|
||||
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-70B-Instruct | Llama3.1-70B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-405B-Instruct | Llama3.1-405B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-3B-Instruct | Llama3.2-3B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
```
|
|
@ -0,0 +1,20 @@
|
|||
# Self-Hosted Distribution
|
||||
|
||||
We offer deployable distributions where you can host your own Llama Stack server using local inference.
|
||||
|
||||
| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
|
||||
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
meta-reference-gpu
|
||||
meta-reference-quantized-gpu
|
||||
ollama
|
||||
tgi
|
||||
dell-tgi
|
||||
```
|
|
@ -0,0 +1,71 @@
|
|||
# Meta Reference Distribution
|
||||
|
||||
The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations.
|
||||
|
||||
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
|
||||
| **Provider(s)** | meta-reference | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
|
||||
|
||||
|
||||
### Step 0. Prerequisite - Downloading Models
|
||||
Please make sure you have the Llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) to download the models.
|
||||
|
||||
```
|
||||
$ ls ~/.llama/checkpoints
|
||||
Llama3.1-8B Llama3.2-11B-Vision-Instruct Llama3.2-1B-Instruct Llama3.2-90B-Vision-Instruct Llama-Guard-3-8B
|
||||
Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-Guard-3-1B Prompt-Guard-86M
|
||||
```
|
||||
|
||||
### Step 1. Start the Distribution
|
||||
|
||||
#### (Option 1) Start with Docker
|
||||
```
|
||||
$ cd distributions/meta-reference-gpu && docker compose up
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have access to a GPU to start a local server.
|
||||
|
||||
|
||||
> [!NOTE]
|
||||
> `~/.llama` should be the path containing downloaded weights of Llama models.
|
||||
|
||||
|
||||
This will download and start running a pre-built docker container. Alternatively, you may use the following commands:
|
||||
|
||||
```
|
||||
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
|
||||
```
|
||||
|
||||
#### (Option 2) Start with Conda
|
||||
|
||||
1. Install the `llama` CLI. See [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
|
||||
|
||||
2. Build the `meta-reference-gpu` distribution
|
||||
|
||||
```
|
||||
$ llama stack build --template meta-reference-gpu --image-type conda
|
||||
```
|
||||
|
||||
3. Start running the distribution
|
||||
```
|
||||
$ cd distributions/meta-reference-gpu
|
||||
$ llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
### (Optional) Serving a new model
|
||||
You may change the `config.model` in `run.yaml` to update the model currently being served by the distribution. Make sure you have the model checkpoint downloaded in your `~/.llama`.
|
||||
```
|
||||
inference:
|
||||
- provider_id: meta0
|
||||
provider_type: meta-reference
|
||||
config:
|
||||
model: Llama3.2-11B-Vision-Instruct
|
||||
quantization: null
|
||||
torch_seed: null
|
||||
max_seq_len: 4096
|
||||
max_batch_size: 1
|
||||
```
|
||||
|
||||
Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
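For example, a typical flow looks like the following (a rough sketch; the exact flags may vary between CLI versions, so verify with `llama model download --help`):

```bash
# See which models the CLI knows about
llama model list

# Download a checkpoint into ~/.llama
# (--source and --model-id are illustrative; check `llama model download --help`)
llama model download --source meta --model-id Llama3.2-11B-Vision-Instruct
```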
|
|
@ -7,7 +7,7 @@ The `llamastack/distribution-ollama` distribution consists of the following prov
|
|||
| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chroma | remote::ollama | meta-reference |
|
||||
|
||||
|
||||
### Start a Distribution (Single Node GPU)
|
||||
### Docker: Start a Distribution (Single Node GPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have access to a GPU to start an Ollama server.
|
||||
|
@ -38,7 +38,7 @@ To kill the server
|
|||
docker compose down
|
||||
```
|
||||
|
||||
### Start the Distribution (Single Node CPU)
|
||||
### Docker: Start the Distribution (Single Node CPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This will start an Ollama server in CPU-only mode; please see the [Ollama documentation](https://github.com/ollama/ollama) for serving models on CPU.
|
||||
|
@ -50,7 +50,7 @@ compose.yaml run.yaml
|
|||
$ docker compose up
|
||||
```
|
||||
|
||||
### (Alternative) ollama run + llama stack run
|
||||
### Conda: ollama run + llama stack run
|
||||
|
||||
If you wish to separately spin up an Ollama server and connect it to Llama Stack, you may use the following commands.
|
||||
|
||||
|
@ -69,12 +69,19 @@ ollama run <model_id>
|
|||
|
||||
#### Start Llama Stack server pointing to Ollama server
|
||||
|
||||
**Via Conda**
|
||||
|
||||
```
|
||||
llama stack build --template ollama --image-type conda
|
||||
llama stack run ./gpu/run.yaml
|
||||
```
|
||||
|
||||
**Via Docker**
|
||||
```
|
||||
docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run.yaml:/root/llamastack-run-ollama.yaml --gpus=all llamastack/distribution-ollama --yaml_config /root/llamastack-run-ollama.yaml
|
||||
```
|
||||
|
||||
Make sure in you `run.yaml` file, you inference provider is pointing to the correct Ollama endpoint. E.g.
|
||||
Make sure in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: ollama0
|
||||
|
@ -83,14 +90,20 @@ inference:
|
|||
url: http://127.0.0.1:14343
|
||||
```
|
||||
|
||||
**Via Conda**
|
||||
### (Optional) Update Model Serving Configuration
|
||||
|
||||
#### Downloading model via Ollama
|
||||
|
||||
You can use Ollama to manage model downloads.
|
||||
|
||||
```
|
||||
llama stack build --template ollama --image-type conda
|
||||
llama stack run ./gpu/run.yaml
|
||||
ollama pull llama3.1:8b-instruct-fp16
|
||||
ollama pull llama3.1:70b-instruct-fp16
|
||||
```
|
||||
|
||||
### Model Serving
|
||||
> [!NOTE]
|
||||
> Please check the [OLLAMA_SUPPORTED_MODELS](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/adapters/inference/ollama/ollama.py) for the supported Ollama models.
|
||||
|
||||
|
||||
To serve a new model with `ollama`
|
||||
```
|
|
@ -8,17 +8,14 @@ The `llamastack/distribution-tgi` distribution consists of the following provide
|
|||
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
|
||||
|
||||
|
||||
### Start the Distribution (Single Node GPU)
|
||||
### Docker: Start the Distribution (Single Node GPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have access to a GPU to start a TGI server.
|
||||
|
||||
|
||||
```
|
||||
$ cd distributions/tgi/gpu
|
||||
$ ls
|
||||
compose.yaml tgi-run.yaml
|
||||
$ docker compose up
|
||||
$ cd distributions/tgi/gpu && docker compose up
|
||||
```
|
||||
|
||||
The script will first start up the TGI server, then start up the Llama Stack distribution server, which connects to the remote TGI provider for inference. You should be able to see the following outputs --
|
||||
|
@ -37,16 +34,13 @@ To kill the server
|
|||
docker compose down
|
||||
```
|
||||
|
||||
### Start the Distribution (Single Node CPU)
|
||||
### Docker: Start the Distribution (Single Node CPU)
|
||||
|
||||
> [!NOTE]
|
||||
> This assumes you have a hosted endpoint compatible with the TGI server.
|
||||
|
||||
```
|
||||
$ cd distributions/tgi/cpu
|
||||
$ ls
|
||||
compose.yaml run.yaml
|
||||
$ docker compose up
|
||||
$ cd distributions/tgi/cpu && docker compose up
|
||||
```
|
||||
|
||||
Replace <ENTER_YOUR_TGI_HOSTED_ENDPOINT> in `run.yaml` file with your TGI endpoint.
|
||||
|
@ -58,20 +52,28 @@ inference:
|
|||
url: <ENTER_YOUR_TGI_HOSTED_ENDPOINT>
|
||||
```
|
||||
|
||||
### (Alternative) TGI server + llama stack run (Single Node GPU)
|
||||
### Conda: TGI server + llama stack run
|
||||
|
||||
If you wish to separately spin up a TGI server, and connect with Llama Stack, you may use the following commands.
|
||||
|
||||
#### (optional) Start TGI server locally
|
||||
#### Start TGI server locally
|
||||
- Please check the [TGI Getting Started Guide](https://github.com/huggingface/text-generation-inference?tab=readme-ov-file#get-started) to get a TGI endpoint.
|
||||
|
||||
```
|
||||
docker run --rm -it -v $HOME/.cache/huggingface:/data -p 5009:5009 --gpus all ghcr.io/huggingface/text-generation-inference:latest --dtype bfloat16 --usage-stats on --sharded false --model-id meta-llama/Llama-3.1-8B-Instruct --port 5009
|
||||
```
|
||||
|
||||
|
||||
#### Start Llama Stack server pointing to TGI server
|
||||
|
||||
**Via Conda**
|
||||
|
||||
```bash
|
||||
llama stack build --template tgi --image-type conda
|
||||
# -- start a TGI server endpoint
|
||||
llama stack run ./gpu/run.yaml
|
||||
```
|
||||
|
||||
**Via Docker**
|
||||
```
|
||||
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-tgi --yaml_config /root/my-run.yaml
|
||||
```
|
||||
|
@ -85,15 +87,8 @@ inference:
|
|||
url: http://127.0.0.1:5009
|
||||
```
|
||||
|
||||
**Via Conda**
|
||||
|
||||
```bash
|
||||
llama stack build --template tgi --image-type conda
|
||||
# -- start a TGI server endpoint
|
||||
llama stack run ./gpu/run.yaml
|
||||
```
|
||||
|
||||
### Model Serving
|
||||
### (Optional) Update Model Serving Configuration
|
||||
To serve a new model with `tgi`, change the docker command flag `--model-id <model-to-serve>`.
|
||||
|
||||
This can be done by editing the `command` args in `compose.yaml`. E.g. replace "Llama-3.2-1B-Instruct" with the model you want to serve.
|
521
docs/source/getting_started/index.md
Normal file
|
@ -0,0 +1,521 @@
|
|||
# Getting Started
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
:hidden:
|
||||
|
||||
distributions/self_hosted_distro/index
|
||||
distributions/remote_hosted_distro/index
|
||||
distributions/ondevice_distro/index
|
||||
```
|
||||
|
||||
At the end of the guide, you will have learned how to:
|
||||
- get a Llama Stack server up and running
|
||||
- set up an agent (with tool-calling and vector stores) that works with the above server
|
||||
|
||||
To see more example apps built using Llama Stack, see [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main).
|
||||
|
||||
## Step 1. Starting Up Llama Stack Server
|
||||
|
||||
### Decide Your Build Type
|
||||
There are two ways to start a Llama Stack:
|
||||
|
||||
- **Docker**: we provide a number of pre-built Docker containers allowing you to get started instantly. If you are focused on application development, we recommend this option.
|
||||
- **Conda**: the `llama` CLI provides a simple set of commands to build, configure and run a Llama Stack server containing the exact combination of providers you wish. We have provided various templates to make getting started easier.
|
||||
|
||||
Both of these provide options to run model inference using our reference implementations, Ollama, TGI, vLLM or even remote providers like Fireworks, Together, Bedrock, etc.
|
||||
|
||||
### Decide Your Inference Provider
|
||||
|
||||
Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have various options. Note that each option has different prerequisites.
|
||||
|
||||
- **Do you have access to a machine with powerful GPUs?**
|
||||
If so, we suggest:
|
||||
- [distribution-meta-reference-gpu](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html)
|
||||
- [distribution-tgi](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html)
|
||||
|
||||
- **Are you running on a "regular" desktop machine?**
|
||||
If so, we suggest:
|
||||
- [distribution-ollama](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html)
|
||||
|
||||
- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest:
|
||||
- [distribution-together](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html)
|
||||
- [distribution-fireworks](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html)
|
||||
|
||||
- **Do you want to run Llama Stack inference on your iOS / Android device?** If so, we suggest:
|
||||
- [iOS](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/ondevice_distro/ios_sdk.html)
|
||||
- [Android](https://github.com/meta-llama/llama-stack-client-kotlin) (coming soon)
|
||||
|
||||
Please see our detailed pages for the types of distributions we offer:
|
||||
|
||||
1. [Self-Hosted Distribution](./distributions/self_hosted_distro/index.md): If you want to run Llama Stack inference on your local machine.
|
||||
2. [Remote-Hosted Distribution](./distributions/remote_hosted_distro/index.md): If you want to connect to a remote hosted inference provider.
|
||||
3. [On-device Distribution](./distributions/ondevice_distro/index.md): If you want to run Llama Stack inference on your iOS / Android device.
|
||||
|
||||
|
||||
### Quick Start Commands
|
||||
|
||||
Once you have decided on an inference provider and distribution, use the following quick start commands to get started.
|
||||
|
||||
##### 1.0 Prerequisite
|
||||
|
||||
```
|
||||
$ git clone git@github.com:meta-llama/llama-stack.git
|
||||
```
|
||||
|
||||
::::{tab-set}
|
||||
|
||||
:::{tab-item} meta-reference-gpu
|
||||
##### System Requirements
|
||||
Access to a single-node GPU to start a local server.
|
||||
|
||||
##### Downloading Models
|
||||
Please make sure you have the Llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) to download the models.
|
||||
|
||||
```
|
||||
$ ls ~/.llama/checkpoints
|
||||
Llama3.1-8B Llama3.2-11B-Vision-Instruct Llama3.2-1B-Instruct Llama3.2-90B-Vision-Instruct Llama-Guard-3-8B
|
||||
Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-Guard-3-1B Prompt-Guard-86M
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
:::{tab-item} tgi
|
||||
##### System Requirements
|
||||
Access to a single-node GPU to start a TGI server.
|
||||
:::
|
||||
|
||||
:::{tab-item} ollama
|
||||
##### System Requirements
|
||||
Access to a single-node CPU/GPU able to run Ollama.
|
||||
:::
|
||||
|
||||
:::{tab-item} together
|
||||
##### System Requirements
|
||||
Access to a single-node CPU and a Together-hosted endpoint with an API key from [together.ai](https://api.together.xyz/signin).
|
||||
:::
|
||||
|
||||
:::{tab-item} fireworks
|
||||
##### System Requirements
|
||||
Access to a single-node CPU and a Fireworks-hosted endpoint with an API key from [fireworks.ai](https://fireworks.ai/).
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
##### 1.1. Start the distribution
|
||||
|
||||
**(Option 1) Via Docker**
|
||||
::::{tab-set}
|
||||
|
||||
:::{tab-item} meta-reference-gpu
|
||||
```
|
||||
$ cd llama-stack/distributions/meta-reference-gpu && docker compose up
|
||||
```
|
||||
|
||||
This will download and start running a pre-built Docker container. Alternatively, you may use the following commands:
|
||||
|
||||
```
|
||||
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} tgi
|
||||
```
|
||||
$ cd llama-stack/distributions/tgi/gpu && docker compose up
|
||||
```
|
||||
|
||||
The script will first start up the TGI server, then start up the Llama Stack distribution server, which connects to the remote TGI provider for inference. You should see the following outputs --
|
||||
```
|
||||
[text-generation-inference] | 2024-10-15T18:56:33.810397Z INFO text_generation_router::server: router/src/server.rs:1813: Using config Some(Llama)
|
||||
[text-generation-inference] | 2024-10-15T18:56:33.810448Z WARN text_generation_router::server: router/src/server.rs:1960: Invalid hostname, defaulting to 0.0.0.0
|
||||
[text-generation-inference] | 2024-10-15T18:56:33.864143Z INFO text_generation_router::server: router/src/server.rs:2353: Connected
|
||||
INFO: Started server process [1]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
```
|
||||
|
||||
To kill the server
|
||||
```
|
||||
docker compose down
|
||||
```
|
||||
:::
|
||||
|
||||
|
||||
:::{tab-item} ollama
|
||||
```
|
||||
$ cd llama-stack/distributions/ollama/cpu && docker compose up
|
||||
```
|
||||
|
||||
You will see outputs similar to the following ---
|
||||
```
|
||||
[ollama] | [GIN] 2024/10/18 - 21:19:41 | 200 | 226.841µs | ::1 | GET "/api/ps"
|
||||
[ollama] | [GIN] 2024/10/18 - 21:19:42 | 200 | 60.908µs | ::1 | GET "/api/ps"
|
||||
INFO: Started server process [1]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
|
||||
[llamastack] | Resolved 12 providers
|
||||
[llamastack] | inner-inference => ollama0
|
||||
[llamastack] | models => __routing_table__
|
||||
[llamastack] | inference => __autorouted__
|
||||
```
|
||||
|
||||
To kill the server
|
||||
```
|
||||
docker compose down
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} fireworks
|
||||
```
|
||||
$ cd llama-stack/distributions/fireworks && docker compose up
|
||||
```
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the correct Fireworks server endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: fireworks
|
||||
provider_type: remote::fireworks
|
||||
config:
|
||||
url: https://api.fireworks.ai/inference
|
||||
api_key: <optional api key>
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} together
|
||||
```
|
||||
$ cd distributions/together && docker compose up
|
||||
```
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the correct Together server endpoint. E.g.
|
||||
```
|
||||
inference:
|
||||
- provider_id: together
|
||||
provider_type: remote::together
|
||||
config:
|
||||
url: https://api.together.xyz/v1
|
||||
api_key: <optional api key>
|
||||
```
|
||||
:::
|
||||
|
||||
|
||||
::::
|
||||
|
||||
**(Option 2) Via Conda**
|
||||
|
||||
::::{tab-set}
|
||||
|
||||
:::{tab-item} meta-reference-gpu
|
||||
1. Install the `llama` CLI. See [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
|
||||
|
||||
2. Build the `meta-reference-gpu` distribution
|
||||
|
||||
```
|
||||
$ llama stack build --template meta-reference-gpu --image-type conda
|
||||
```
|
||||
|
||||
3. Start running the distribution
|
||||
```
|
||||
$ cd llama-stack/distributions/meta-reference-gpu
|
||||
$ llama stack run ./run.yaml
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} tgi
|
||||
1. Install the `llama` CLI. See [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)
|
||||
|
||||
2. Build the `tgi` distribution
|
||||
|
||||
```bash
|
||||
llama stack build --template tgi --image-type conda
|
||||
```
|
||||
|
||||
3. Start a TGI server endpoint
|
||||
|
||||
4. Make sure that in your `run.yaml` file, `conda_env` points to your conda environment and the inference provider points to the correct TGI server endpoint. E.g.
|
||||
```
|
||||
conda_env: llamastack-tgi
|
||||
...
|
||||
inference:
|
||||
- provider_id: tgi0
|
||||
provider_type: remote::tgi
|
||||
config:
|
||||
url: http://127.0.0.1:5009
|
||||
```
|
||||
|
||||
5. Start Llama Stack server
|
||||
```bash
|
||||
llama stack run ./gpu/run.yaml
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} ollama
|
||||
|
||||
If you wish to separately spin up an Ollama server and connect it to Llama Stack, you may use the following commands.
|
||||
|
||||
#### Start the Ollama server
|
||||
- Please check the [Ollama documentation](https://github.com/ollama/ollama) for more details.
|
||||
|
||||
**Via Docker**
|
||||
```
|
||||
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||
```
|
||||
|
||||
**Via CLI**
|
||||
```
|
||||
ollama run <model_id>
|
||||
```
|
||||
|
||||
#### Start Llama Stack server pointing to Ollama server
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the correct Ollama endpoint. E.g.
|
||||
```
|
||||
conda_env: llamastack-ollama
|
||||
...
|
||||
inference:
|
||||
- provider_id: ollama0
|
||||
provider_type: remote::ollama
|
||||
config:
|
||||
url: http://127.0.0.1:11434
|
||||
```
|
||||
|
||||
```
|
||||
llama stack build --template ollama --image-type conda
|
||||
llama stack run ./gpu/run.yaml
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
:::{tab-item} fireworks
|
||||
|
||||
```bash
|
||||
llama stack build --template fireworks --image-type conda
|
||||
# -- modify run.yaml to a valid Fireworks server endpoint
|
||||
llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the correct Fireworks server endpoint. E.g.
|
||||
```
|
||||
conda_env: llamastack-fireworks
|
||||
...
|
||||
inference:
|
||||
- provider_id: fireworks
|
||||
provider_type: remote::fireworks
|
||||
config:
|
||||
url: https://api.fireworks.ai/inference
|
||||
api_key: <optional api key>
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} together
|
||||
|
||||
```bash
|
||||
llama stack build --template together --image-type conda
|
||||
# -- modify run.yaml to a valid Together server endpoint
|
||||
llama stack run ./run.yaml
|
||||
```
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the correct Together server endpoint. E.g.
|
||||
```
|
||||
conda_env: llamastack-together
|
||||
...
|
||||
inference:
|
||||
- provider_id: together
|
||||
provider_type: remote::together
|
||||
config:
|
||||
url: https://api.together.xyz/v1
|
||||
api_key: <optional api key>
|
||||
```
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
##### 1.2 (Optional) Update Model Serving Configuration
|
||||
::::{tab-set}
|
||||
|
||||
:::{tab-item} meta-reference-gpu
|
||||
You may change the `config.model` in `run.yaml` to update the model currently being served by the distribution. Make sure you have the model checkpoint downloaded in your `~/.llama`.
|
||||
```
|
||||
inference:
|
||||
- provider_id: meta0
|
||||
provider_type: meta-reference
|
||||
config:
|
||||
model: Llama3.2-11B-Vision-Instruct
|
||||
quantization: null
|
||||
torch_seed: null
|
||||
max_seq_len: 4096
|
||||
max_batch_size: 1
|
||||
```
|
||||
|
||||
Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
|
||||
:::
|
||||
|
||||
:::{tab-item} tgi
|
||||
To serve a new model with `tgi`, change the docker command flag `--model-id <model-to-serve>`.
|
||||
|
||||
This can be done by editing the `command` args in `compose.yaml`. E.g. replace "Llama-3.2-1B-Instruct" with the model you want to serve.
|
||||
|
||||
```
|
||||
command: ["--dtype", "bfloat16", "--usage-stats", "on", "--sharded", "false", "--model-id", "meta-llama/Llama-3.2-1B-Instruct", "--port", "5009", "--cuda-memory-fraction", "0.3"]
|
||||
```
|
||||
|
||||
or by changing the docker run command's `--model-id` flag
|
||||
```
|
||||
docker run --rm -it -v $HOME/.cache/huggingface:/data -p 5009:5009 --gpus all ghcr.io/huggingface/text-generation-inference:latest --dtype bfloat16 --usage-stats on --sharded false --model-id meta-llama/Llama-3.2-1B-Instruct --port 5009
|
||||
```
|
||||
|
||||
Make sure your `run.yaml` file has the inference provider pointing to the TGI server endpoint serving your model.
|
||||
```
|
||||
inference:
|
||||
- provider_id: tgi0
|
||||
provider_type: remote::tgi
|
||||
config:
|
||||
url: http://127.0.0.1:5009
|
||||
```
|
||||
|
||||
|
||||
Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
|
||||
:::
|
||||
|
||||
:::{tab-item} ollama
|
||||
You can use Ollama to manage model downloads.
|
||||
|
||||
```
|
||||
ollama pull llama3.1:8b-instruct-fp16
|
||||
ollama pull llama3.1:70b-instruct-fp16
|
||||
```
|
||||
|
||||
> Please check the [OLLAMA_SUPPORTED_MODELS](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/adapters/inference/ollama/ollama.py) for the supported Ollama models.
|
||||
|
||||
|
||||
To serve a new model with `ollama`
|
||||
```
|
||||
ollama run <model_name>
|
||||
```
|
||||
|
||||
To make sure that the model is being served correctly, run `ollama ps` to get a list of models being served by ollama.
|
||||
```
|
||||
$ ollama ps
|
||||
|
||||
NAME ID SIZE PROCESSOR UNTIL
|
||||
llama3.1:8b-instruct-fp16 4aacac419454 17 GB 100% GPU 4 minutes from now
|
||||
```
|
||||
|
||||
To verify that the model served by Ollama is correctly connected to the Llama Stack server:
|
||||
```
|
||||
$ llama-stack-client models list
|
||||
+----------------------+----------------------+---------------+-----------------------------------------------+
|
||||
| identifier | llama_model | provider_id | metadata |
|
||||
+======================+======================+===============+===============================================+
|
||||
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | ollama0 | {'ollama_model': 'llama3.1:8b-instruct-fp16'} |
|
||||
+----------------------+----------------------+---------------+-----------------------------------------------+
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} together
|
||||
Use `llama-stack-client models list` to check the available models served by Together.
|
||||
|
||||
```
|
||||
$ llama-stack-client models list
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| identifier | llama_model | provider_id | metadata |
|
||||
+==============================+==============================+===============+============+
|
||||
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-70B-Instruct | Llama3.1-70B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-405B-Instruct | Llama3.1-405B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-3B-Instruct | Llama3.2-3B-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | together0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
```
|
||||
:::
|
||||
|
||||
:::{tab-item} fireworks
|
||||
Use `llama-stack-client models list` to check the available models served by Fireworks.
|
||||
```
|
||||
$ llama-stack-client models list
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| identifier | llama_model | provider_id | metadata |
|
||||
+==============================+==============================+===============+============+
|
||||
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-70B-Instruct | Llama3.1-70B-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.1-405B-Instruct | Llama3.1-405B-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-1B-Instruct | Llama3.2-1B-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-3B-Instruct | Llama3.2-3B-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | fireworks0 | {} |
|
||||
+------------------------------+------------------------------+---------------+------------+
|
||||
```
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
|
||||
##### Troubleshooting
|
||||
- If you encounter any issues, search through our [GitHub Issues](https://github.com/meta-llama/llama-stack/issues), or file a new issue.
|
||||
- Use the `--port <PORT>` flag to use a different port number. For `docker run`, update the `-p <PORT>:<PORT>` mapping accordingly (see the example below).
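For instance, to move the server to port 5001, the commands look roughly like this (a sketch; adjust the image name and paths to your distribution):

```bash
# Conda: pass --port to the server
llama stack run ./run.yaml --port 5001

# Docker: pass --port to the server and update the host port mapping to match
docker run -it -p 5001:5001 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml --port 5001
```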
|
||||
|
||||
|
||||
## Step 2. Run Llama Stack App
|
||||
|
||||
### Chat Completion Test
|
||||
Once the server is set up, we can test it with a client to verify it's working correctly. The following command will send a chat completion request to the server's `/inference/chat_completion` API:
|
||||
|
||||
```bash
|
||||
$ curl http://localhost:5000/inference/chat_completion \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"model": "Llama3.1-8B-Instruct",
|
||||
"messages": [
|
||||
{"role": "system", "content": "You are a helpful assistant."},
|
||||
{"role": "user", "content": "Write me a 2 sentence poem about the moon"}
|
||||
],
|
||||
"sampling_params": {"temperature": 0.7, "seed": 42, "max_tokens": 512}
|
||||
}'
|
||||
|
||||
Output:
|
||||
{'completion_message': {'role': 'assistant',
|
||||
'content': 'The moon glows softly in the midnight sky, \nA beacon of wonder, as it catches the eye.',
|
||||
'stop_reason': 'out_of_tokens',
|
||||
'tool_calls': []},
|
||||
'logprobs': null}
|
||||
|
||||
```
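The same request can also be sent through the Python client SDK. The snippet below is a minimal sketch that assumes `llama-stack-client` is installed (`pip install llama-stack-client`) and the server is listening on `localhost:5000`; method and field names may differ slightly between SDK versions.

```python
from llama_stack_client import LlamaStackClient

# Point the client at the running Llama Stack server
client = LlamaStackClient(base_url="http://localhost:5000")

# Same chat completion request as the curl example above
response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write me a 2 sentence poem about the moon"},
    ],
)
print(response.completion_message.content)
```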
|
||||
|
||||
### Run Agent App
|
||||
|
||||
To run an agent app, check out the example demo scripts that use the client SDKs to talk to the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo. To run a simple agent app:
|
||||
|
||||
```bash
|
||||
$ git clone git@github.com:meta-llama/llama-stack-apps.git
|
||||
$ cd llama-stack-apps
|
||||
$ pip install -r requirements.txt
|
||||
|
||||
$ python -m examples.agents.client <host> <port>
|
||||
```
|
||||
|
||||
You will see outputs of the form --
|
||||
```
|
||||
User> I am planning a trip to Switzerland, what are the top 3 places to visit?
|
||||
inference> Switzerland is a beautiful country with a rich history, stunning landscapes, and vibrant culture. Here are three must-visit places to add to your itinerary:
|
||||
...
|
||||
|
||||
User> What is so special about #1?
|
||||
inference> Jungfraujoch, also known as the "Top of Europe," is a unique and special place for several reasons:
|
||||
...
|
||||
|
||||
User> What other countries should I consider to club?
|
||||
inference> Considering your interest in Switzerland, here are some neighboring countries that you may want to consider visiting:
|
||||
```
|
|
@ -1,40 +1,93 @@
|
|||
# llama-stack documentation
|
||||
# Llama Stack
|
||||
|
||||
Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. It empowers developers building agentic applications by giving them options to operate in various environments (on-prem, cloud, single-node, on-device) while relying on a standard API interface and the same DevEx that is certified by Meta.
|
||||
Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. It empowers developers building agentic applications by giving them options to operate in various environments (on-prem, cloud, single-node, on-device) while relying on a standard API interface and developer experience that's certified by Meta.
|
||||
|
||||
The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to building and running AI agents in production. Beyond definition, we are building providers for the Llama Stack APIs. These were developing open-source versions and partnering with providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.
|
||||
The Stack APIs are rapidly improving but still a work-in-progress. We invite feedback as well as direct contributions.
|
||||
|
||||
The Stack APIs are rapidly improving, but still very much work in progress and we invite feedback as well as direct contributions.
|
||||
|
||||

|
||||
```{image} ../_static/llama-stack.png
|
||||
:alt: Llama Stack
|
||||
:width: 600px
|
||||
:align: center
|
||||
```
|
||||
|
||||
## APIs
|
||||
|
||||
The Llama Stack consists of the following set of APIs:
|
||||
The set of APIs in Llama Stack can be roughly split into two broad categories:
|
||||
|
||||
- APIs focused on Application development
|
||||
- Inference
|
||||
- Safety
|
||||
- Memory
|
||||
- Agentic System
|
||||
- Evaluation
|
||||
|
||||
- APIs focused on Model development
|
||||
- Evaluation
|
||||
- Post Training
|
||||
- Synthetic Data Generation
|
||||
- Reward Scoring
|
||||
Each of the APIs themselves is a collection of REST endpoints.
|
||||
|
||||
Each API is a collection of REST endpoints.
|
||||
|
||||
## API Providers
|
||||
|
||||
A Provider is what makes the API real -- they provide the actual implementation backing the API.
|
||||
A Provider is what makes the API real – they provide the actual implementation backing the API.
|
||||
|
||||
As an example, for Inference, we could have the implementation be backed by open source libraries like [ torch | vLLM | TensorRT ] as possible options.
|
||||
|
||||
A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
|
||||
A provider can also be a relay to a remote REST service – e.g. cloud providers or dedicated inference providers that serve these APIs.
|
||||
|
||||
## Distribution
|
||||
|
||||
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally, but can choose a cloud provider for a large model. Regardless, the higher level APIs your app needs to work with don't need to change at all. You can even imagine moving across the server / mobile-device boundary as well always using the same uniform set of APIs for developing Generative AI applications.
|
||||
A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers – some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally, but can choose a cloud provider for a large model. Regardless, the higher level APIs your app needs to work with don't need to change at all. You can even imagine moving across the server / mobile-device boundary as well, always using the same uniform set of APIs for developing Generative AI applications.
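As an illustration, a distribution's `run.yaml` might mix a local inference provider with a remote one. This is a minimal sketch; the provider names and config fields follow the examples elsewhere in these docs, and your actual configuration may differ.

```yaml
inference:
  # Local provider serving a small model from ~/.llama
  - provider_id: meta0
    provider_type: meta-reference
    config:
      model: Llama3.2-3B-Instruct
      max_seq_len: 4096
      max_batch_size: 1
  # Remote provider for a large hosted model
  - provider_id: together0
    provider_type: remote::together
    config:
      url: https://api.together.xyz/v1
      api_key: <optional api key>
```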
|
||||
|
||||
## Supported Llama Stack Implementations
|
||||
### API Providers
|
||||
| **API Provider Builder** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
|
||||
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
|
||||
| Meta Reference | Single Node | Y | Y | Y | Y | Y |
|
||||
| Fireworks | Hosted | Y | Y | Y | | |
|
||||
| AWS Bedrock | Hosted | | Y | | Y | |
|
||||
| Together | Hosted | Y | Y | | Y | |
|
||||
| Ollama | Single Node | | Y | | |
|
||||
| TGI | Hosted and Single Node | | Y | | |
|
||||
| Chroma | Single Node | | | Y | | |
|
||||
| PG Vector | Single Node | | | Y | | |
|
||||
| PyTorch ExecuTorch | On-device iOS | Y | Y | | |
|
||||
|
||||
### Distributions
|
||||
|
||||
| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
|
||||
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
|
||||
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
|
||||
## Llama Stack Client SDK
|
||||
|
||||
| **Language** | **Client SDK** | **Package** |
|
||||
| :----: | :----: | :----: |
|
||||
| Python | [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python) | [](https://pypi.org/project/llama_stack_client/)
|
||||
| Swift | [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift) | [](https://swiftpackageindex.com/meta-llama/llama-stack-client-swift)
|
||||
| Node | [llama-stack-client-node](https://github.com/meta-llama/llama-stack-client-node) | [](https://npmjs.org/package/llama-stack-client)
|
||||
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) |
|
||||
|
||||
Check out our client SDKs for connecting to the Llama Stack server in your preferred language: you can choose from [Python](https://github.com/meta-llama/llama-stack-client-python), [Node](https://github.com/meta-llama/llama-stack-client-node), [Swift](https://github.com/meta-llama/llama-stack-client-swift), and [Kotlin](https://github.com/meta-llama/llama-stack-client-kotlin) to quickly build your applications.
|
||||
|
||||
You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repo.
|
||||
|
||||
|
||||
```{toctree}
|
||||
cli_reference.md
|
||||
getting_started.md
|
||||
:hidden:
|
||||
:maxdepth: 3
|
||||
|
||||
getting_started/index
|
||||
cli_reference/index
|
||||
cli_reference/download_models
|
||||
api_providers/index
|
||||
distribution_dev/index
|
||||
```
|
||||
|
|