---
orphan: true
---

# Fireworks Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::fireworks` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
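
Once a server built from this distribution is running (see the sections below), you can inspect the active providers. A minimal sketch, assuming the `llama-stack-client` CLI (`pip install llama-stack-client`) is installed and configured against your server:

```bash
# Shows each API alongside the provider backing it, mirroring the table above.
llama-stack-client providers list
```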

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `FIREWORKS_API_KEY`: Fireworks.AI API Key (default: empty)
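
For example, you can export both in your shell before starting the server (a minimal sketch; substitute your actual key):

```bash
# Port the Llama Stack server will listen on (5001 is the documented default).
export LLAMA_STACK_PORT=5001

# Your Fireworks.AI API key; the value below is a placeholder, not a real key.
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```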

### Models

The following models are available by default:

- `meta-llama/Llama-3.1-8B-Instruct` (`accounts/fireworks/models/llama-v3p1-8b-instruct`)
- `meta-llama/Llama-3.1-70B-Instruct` (`accounts/fireworks/models/llama-v3p1-70b-instruct`)
- `meta-llama/Llama-3.1-405B-Instruct-FP8` (`accounts/fireworks/models/llama-v3p1-405b-instruct`)
- `meta-llama/Llama-3.2-1B-Instruct` (`accounts/fireworks/models/llama-v3p2-1b-instruct`)
- `meta-llama/Llama-3.2-3B-Instruct` (`accounts/fireworks/models/llama-v3p2-3b-instruct`)
- `meta-llama/Llama-3.2-11B-Vision-Instruct` (`accounts/fireworks/models/llama-v3p2-11b-vision-instruct`)
- `meta-llama/Llama-3.2-90B-Vision-Instruct` (`accounts/fireworks/models/llama-v3p2-90b-vision-instruct`)
- `meta-llama/Llama-3.3-70B-Instruct` (`accounts/fireworks/models/llama-v3p3-70b-instruct`)
- `meta-llama/Llama-Guard-3-8B` (`accounts/fireworks/models/llama-guard-3-8b`)
- `meta-llama/Llama-Guard-3-11B-Vision` (`accounts/fireworks/models/llama-guard-3-11b-vision`)
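
Once the server is running (see the next section), you can confirm that these models were registered. A minimal sketch, assuming the `llama-stack-client` CLI is installed and pointed at your server:

```bash
# Lists every model registered with the distribution, including the
# Fireworks-served Llama models above.
llama-stack-client models list
```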

### Prerequisite: API Keys

Make sure you have access to a Fireworks API key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).

## Running Llama Stack with Fireworks

You can run the distribution via Docker, which uses a pre-built image, or via Conda, which builds the distribution from source.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-fireworks \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
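
To check that the container came up, you can probe the server over HTTP. A minimal sketch; the `/v1/health` route is an assumption and may vary across Llama Stack versions:

```bash
# Expect an HTTP 200 with a small JSON status body if the server is healthy.
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```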

### Via Conda

```bash
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
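
With the server up via either method, a quick end-to-end smoke test can confirm that inference against Fireworks works. A minimal sketch using the `llama-stack-client` CLI; exact flags may differ between client versions:

```bash
# Point the client at the local server (adjust the port if you changed it).
llama-stack-client configure --endpoint http://localhost:5001

# Send a single chat message through the configured inference provider;
# this requires a valid FIREWORKS_API_KEY on the server side.
llama-stack-client inference chat-completion --message "Hello, which model are you?"
```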