# Fireworks Distribution

The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.

| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |
### Step 0. Prerequisite
- Make sure you have access to a Fireworks API key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).

### Step 1. Start the Distribution (Single Node CPU)

#### (Option 1) Start Distribution Via Docker

> [!NOTE]
> This assumes you have a hosted endpoint at Fireworks with an API key.

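As a rough sketch, launching the Docker-based distribution might look like the following; the `distributions/fireworks` compose path is an assumption, mirroring the `docker compose` layout used by the meta-reference-gpu distribution later in this document.

```bash
# Sketch only: assumes a compose file is shipped under distributions/fireworks,
# following the same layout as the other distributions in this repo.
cd distributions/fireworks && docker compose up
```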
In your `run.yaml`, make sure the Fireworks inference provider points to the correct endpoint:

```
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <optional api key>
```

#### (Option 2) Start Distribution Via Conda

```bash
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml
```
### (Optional) Model Serving

Use `llama-stack-client models list` to check the available models served by Fireworks.

```
$ llama-stack-client models list
```

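If the client runs on a different host than the stack, you can point it at the server explicitly. The `--endpoint` flag and the port 5000 used below are assumptions, based on the port mapping shown in the Docker example later in this document.

```bash
# Sketch: assumes the llama-stack-client CLI accepts an --endpoint flag and the
# stack server is listening on port 5000.
llama-stack-client --endpoint http://localhost:5000 models list
```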
# Meta Reference GPU Distribution

The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations.

| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
| **Provider(s)** | meta-reference | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |

### Step 0. Prerequisite - Downloading Models

Please make sure you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](https://llama-stack.readthedocs.io/en/latest/cli_reference/download_models.html) to download the models.

```
$ ls ~/.llama/checkpoints
Llama3.1-8B Llama3.2-11B-Vision-Instruct Llama3.2-1B-Instruct Llama3
Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-Guard-3-1B Prompt-Guard-86M
```

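If any checkpoints are missing, they can be fetched with the `llama` CLI described in the linked guide. The exact subcommand and flags below are an assumption based on that CLI reference.

```bash
# Sketch: assumes the `llama download` subcommand from the linked CLI reference;
# a signed download URL from Meta is required when using --source meta.
llama download --source meta --model-id Llama3.1-8B-Instruct --meta-url <SIGNED_URL>
```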
### Step 1. Start the Distribution

#### (Option 1) Start with Docker

```
$ cd distributions/meta-reference-gpu && docker compose up
```

This will download and start running a pre-built docker container. Alternatively, you can run the container directly:

```
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
```

#### (Option 2) Start with Conda

1. Install the `llama` CLI. See [CLI Reference](https://llama-stack.readthedocs.io/en/latest/cli_reference/index.html)

2. Build the `meta-reference-gpu` distribution, as sketched below.

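Assuming the build follows the same `llama stack build` pattern used for the Fireworks distribution above (the template name matching the distribution id is an assumption), the command would look roughly like:

```bash
# Sketch only: the template name is assumed to match the distribution id.
llama stack build --template meta-reference-gpu --image-type conda
```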
3. Start the distribution:

```
$ cd distributions/meta-reference-gpu
$ llama stack run ./run.yaml
```

### (Optional) Serving a new model

You may change the `config.model` in `run.yaml` to update the model currently being served by the distribution. Make sure you have the model checkpoint downloaded in your `~/.llama`.

```
inference:
  ...
```
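For illustration, a fuller `inference` entry might look like the following; only `config.model` is the field referenced above, and the provider id and remaining keys are assumptions.

```
# Illustrative sketch: keys other than config.model are assumptions.
inference:
  - provider_id: meta-reference
    provider_type: meta-reference
    config:
      model: Llama3.2-11B-Vision-Instruct  # any checkpoint present under ~/.llama
```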