---
orphan: true
---
# Fireworks Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```
The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| inference | `remote::fireworks`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::wolfram-alpha`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
### Environment Variables

The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `FIREWORKS_API_KEY`: Fireworks.AI API Key (default: empty)
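For example, you might export both variables in your shell before starting the server. The key value below is a placeholder for illustration, not a real credential:

```bash
# Placeholder values for illustration -- substitute your own key
export LLAMA_STACK_PORT=8321
export FIREWORKS_API_KEY=fw_xxxxxxxxxxxxxxxx
```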
### Models

The following models are available by default:
- `accounts/fireworks/models/llama-v3p1-8b-instruct` (aliases: `meta-llama/Llama-3.1-8B-Instruct`)
- `accounts/fireworks/models/llama-v3p1-70b-instruct` (aliases: `meta-llama/Llama-3.1-70B-Instruct`)
- `accounts/fireworks/models/llama-v3p1-405b-instruct` (aliases: `meta-llama/Llama-3.1-405B-Instruct-FP8`)
- `accounts/fireworks/models/llama-v3p2-3b-instruct` (aliases: `meta-llama/Llama-3.2-3B-Instruct`)
- `accounts/fireworks/models/llama-v3p2-11b-vision-instruct` (aliases: `meta-llama/Llama-3.2-11B-Vision-Instruct`)
- `accounts/fireworks/models/llama-v3p2-90b-vision-instruct` (aliases: `meta-llama/Llama-3.2-90B-Vision-Instruct`)
- `accounts/fireworks/models/llama-v3p3-70b-instruct` (aliases: `meta-llama/Llama-3.3-70B-Instruct`)
- `accounts/fireworks/models/llama-guard-3-8b` (aliases: `meta-llama/Llama-Guard-3-8B`)
- `accounts/fireworks/models/llama-guard-3-11b-vision` (aliases: `meta-llama/Llama-Guard-3-11B-Vision`)
- `nomic-ai/nomic-embed-text-v1.5`
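Either the full Fireworks model ID or its alias can be used wherever a model identifier is expected. Once a server is running (see below), you can confirm which models it serves over plain HTTP; this is a minimal sketch assuming the standard `/v1/models` route and the default port:

```bash
# List the models registered on a running Llama Stack server
# (assumes the standard /v1/models route; adjust host/port as needed)
curl -s http://localhost:8321/v1/models
```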
### Prerequisite: API Keys

Make sure you have access to a Fireworks API Key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).
## Running Llama Stack with Fireworks

You can do this either via Conda (building the distribution code) or via Docker, which has a pre-built image.
### Via Docker

This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-fireworks \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
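Once the container is up, you can sanity-check it from another shell. This is a minimal sketch assuming the server exposes its inspect health route at `/v1/health`:

```bash
# Quick liveness check against the running container
# (assumes the /v1/health inspect route)
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```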
### Via Conda

```bash
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
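With the server running under either method, a quick way to verify the distribution end to end is the `llama-stack-client` CLI, installed separately (e.g. `pip install llama-stack-client`). A sketch, assuming the server is reachable at the port exported above:

```bash
# Point the CLI client at the local server, then list the served models
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```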