---
orphan: true
---
# Fireworks Distribution
```{toctree}
:maxdepth: 2
:hidden:

self
```
The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.
| API | Provider(s) |
|---|---|
| agents | inline::meta-reference |
| datasetio | remote::huggingface, inline::localfs |
| eval | inline::meta-reference |
| inference | remote::fireworks |
| memory | inline::faiss, remote::chromadb, remote::pgvector |
| safety | inline::llama-guard |
| scoring | inline::basic, inline::llm-as-judge, inline::braintrust |
| telemetry | inline::meta-reference |
| tool_runtime | remote::brave-search, remote::tavily-search, inline::code-interpreter, inline::memory-runtime |
### Environment Variables
The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `FIREWORKS_API_KEY`: Fireworks.AI API Key (default: ``)
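For example, you might set both in your shell before launching the server (the key value below is a placeholder):

```bash
# Placeholder values; substitute your real Fireworks API key.
export LLAMA_STACK_PORT=5001
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```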
### Models
The following models are available by default:
- `meta-llama/Llama-3.1-8B-Instruct (accounts/fireworks/models/llama-v3p1-8b-instruct)`
- `meta-llama/Llama-3.1-70B-Instruct (accounts/fireworks/models/llama-v3p1-70b-instruct)`
- `meta-llama/Llama-3.1-405B-Instruct-FP8 (accounts/fireworks/models/llama-v3p1-405b-instruct)`
- `meta-llama/Llama-3.2-1B-Instruct (accounts/fireworks/models/llama-v3p2-1b-instruct)`
- `meta-llama/Llama-3.2-3B-Instruct (accounts/fireworks/models/llama-v3p2-3b-instruct)`
- `meta-llama/Llama-3.2-11B-Vision-Instruct (accounts/fireworks/models/llama-v3p2-11b-vision-instruct)`
- `meta-llama/Llama-3.2-90B-Vision-Instruct (accounts/fireworks/models/llama-v3p2-90b-vision-instruct)`
- `meta-llama/Llama-3.3-70B-Instruct (accounts/fireworks/models/llama-v3p3-70b-instruct)`
- `meta-llama/Llama-Guard-3-8B (accounts/fireworks/models/llama-guard-3-8b)`
- `meta-llama/Llama-Guard-3-11B-Vision (accounts/fireworks/models/llama-guard-3-11b-vision)`
## Prerequisite: API Keys
Make sure you have access to a Fireworks API Key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).
## Running Llama Stack with Fireworks
You can do this via Conda (build the distribution code yourself) or Docker (which uses a pre-built image).
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-fireworks \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
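Once the container is up, one way to sanity-check the deployment is to list the registered models. The sketch below assumes the `llama-stack-client` CLI is installed (e.g. via `pip install llama-stack-client`) and that the server is reachable on the port above:

```bash
# Point the client at the local server, then list the models it serves.
# The endpoint URL assumes the default port from the example above.
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```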
### Via Conda
```bash
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
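With the server running under either method, a minimal smoke test is to send a chat-completion request. The route and payload shape below are assumptions based on the `/v1` REST API layout; adjust them if your Llama Stack version exposes a different path:

```bash
# Minimal chat-completion request against the local server.
# The /v1/inference/chat-completion route and the request body are
# assumptions; consult the API reference for your installed version.
curl -sS "http://localhost:${LLAMA_STACK_PORT:-5001}/v1/inference/chat-completion" \
  -H "Content-Type: application/json" \
  -d '{
        "model_id": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```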