Commit: fireworks (7d953d5ee5, parent 39872ca4b4)
3 changed files with 79 additions and 25 deletions

The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.

| **API**         | **Inference**     | **Agents**     | **Memory**     | **Safety**     | **Telemetry**  |
|-----------------|-------------------|----------------|----------------|----------------|----------------|
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |

### Docker: Start the Distribution (Single Node CPU)
> [!NOTE]
> This assumes you have a hosted endpoint at Fireworks with an API key.
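
Before starting anything, it can be worth confirming that the key itself is accepted by Fireworks. The snippet below is only a sketch, not part of the distribution: it assumes your key is exported in an environment variable named `FIREWORKS_API_KEY` (a name chosen here for illustration) and that Fireworks exposes an OpenAI-compatible model listing at the URL shown.

```
# Hypothetical pre-flight check for the Fireworks API key (not part of llama-stack).
# Assumes the key is exported as FIREWORKS_API_KEY and that Fireworks serves an
# OpenAI-compatible GET /inference/v1/models endpoint.
import json
import os
import urllib.request

api_key = os.environ["FIREWORKS_API_KEY"]
req = urllib.request.Request(
    "https://api.fireworks.ai/inference/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
)
with urllib.request.urlopen(req) as resp:
    payload = json.load(resp)
print(f"Key accepted; {len(payload.get('data', []))} models visible to this account")
```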
```
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-fireworks --yaml_config /root/my-run.yaml
```
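
Once the container is up, you can sanity-check the endpoint from the host. This is a sketch rather than a documented step: it assumes the `llama-stack-client` Python package is installed (`pip install llama-stack-client`), that the server is reachable on `localhost:5000` as mapped above, and that the client surface matches your installed version.

```
# Sketch: confirm the distribution answers on the published port.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Listing models exercises the whole path: client -> stack server -> Fireworks.
for model in client.models.list():
    print(model)
```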
Make sure the inference provider in your `run.yaml` file points to the correct Fireworks server endpoint, e.g.
```
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <enter your api key>
```
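
If you edit `run.yaml` by hand, a quick programmatic check can catch typos before launching. A minimal sketch, assuming PyYAML is installed and that `inference` is laid out as in the fragment above (adjust the lookup if your file nests it under a top-level `providers` key):

```
# Sketch: verify the Fireworks inference provider in run.yaml before `llama stack run`.
# Assumes PyYAML (pip install pyyaml) and the layout shown in the example above.
import yaml

with open("run.yaml") as f:
    cfg = yaml.safe_load(f)

# The example above keeps the list under "inference"; some layouts nest it under "providers".
providers = cfg.get("inference") or cfg.get("providers", {}).get("inference", [])
fireworks = next((p for p in providers if p.get("provider_type") == "remote::fireworks"), None)
assert fireworks is not None, "no remote::fireworks provider found in run.yaml"

assert fireworks["config"]["url"] == "https://api.fireworks.ai/inference"
assert fireworks["config"].get("api_key"), "api_key is empty; set it before starting the stack"
print("Fireworks provider looks good:", fireworks["provider_id"])
```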
### Conda: llama stack run (Single Node CPU)
**Via Conda**
```
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml
```
### Model Serving
Use `llama-stack-client models list` to check the available models served by Fireworks.
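
Once a model shows up in that list, you can exercise it through the stack's inference API. The snippet below is a sketch rather than canonical client usage: it assumes the `llama-stack-client` Python package, the server from the sections above on `localhost:5000`, and a placeholder model identifier that you should replace with one actually reported by `models list` (the exact call shape may differ between client versions).

```
# Sketch: run a chat completion against a model served via Fireworks.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",  # placeholder; use an identifier from `models list`
    messages=[{"role": "user", "content": "Say hello from the Fireworks distribution."}],
)
print(response.completion_message.content)
```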