---
orphan: true
---
# Fireworks Distribution
```{toctree}
:maxdepth: 2
:hidden:

self
```
The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.
| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::fireworks`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::wolfram-alpha`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
## Environment Variables
The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `FIREWORKS_API_KEY`: Fireworks.AI API Key (default: empty)
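For example, you might export both variables in your shell before starting the server (the key value below is a placeholder; substitute your own key from fireworks.ai):

```bash
# Placeholder values; replace <your-fireworks-api-key> with a real key
export LLAMA_STACK_PORT=8321
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```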
## Models
The following models are available by default:
- `accounts/fireworks/models/llama-v3p1-8b-instruct` (aliases: `meta-llama/Llama-3.1-8B-Instruct`)
- `accounts/fireworks/models/llama-v3p1-70b-instruct` (aliases: `meta-llama/Llama-3.1-70B-Instruct`)
- `accounts/fireworks/models/llama-v3p1-405b-instruct` (aliases: `meta-llama/Llama-3.1-405B-Instruct-FP8`)
- `accounts/fireworks/models/llama-v3p2-3b-instruct` (aliases: `meta-llama/Llama-3.2-3B-Instruct`)
- `accounts/fireworks/models/llama-v3p2-11b-vision-instruct` (aliases: `meta-llama/Llama-3.2-11B-Vision-Instruct`)
- `accounts/fireworks/models/llama-v3p2-90b-vision-instruct` (aliases: `meta-llama/Llama-3.2-90B-Vision-Instruct`)
- `accounts/fireworks/models/llama-v3p3-70b-instruct` (aliases: `meta-llama/Llama-3.3-70B-Instruct`)
- `accounts/fireworks/models/llama-guard-3-8b` (aliases: `meta-llama/Llama-Guard-3-8B`)
- `accounts/fireworks/models/llama-guard-3-11b-vision` (aliases: `meta-llama/Llama-Guard-3-11B-Vision`)
- `accounts/fireworks/models/llama4-scout-instruct-basic` (aliases: `meta-llama/Llama-4-Scout-17B-16E-Instruct`)
- `accounts/fireworks/models/llama4-maverick-instruct-basic` (aliases: `meta-llama/Llama-4-Maverick-17B-128E-Instruct`)
- `nomic-ai/nomic-embed-text-v1.5`
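Either identifier refers to the same underlying model once a server is running (see the sections below). As a sketch, assuming the `llama-stack-client` CLI is installed and configured against your server:

```bash
# Both lookups should resolve to the same Fireworks-hosted model
llama-stack-client models get accounts/fireworks/models/llama-v3p1-8b-instruct
llama-stack-client models get meta-llama/Llama-3.1-8B-Instruct
```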
## Prerequisite: API Keys

Make sure you have access to a Fireworks API Key. You can get one by visiting [fireworks.ai](https://fireworks.ai/).
## Running Llama Stack with Fireworks
You can do this via Conda (build the distribution code yourself) or Docker (which has a pre-built image).
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-fireworks \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
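Once the container is up, a quick smoke test is to point a client at it and list the registered models. This sketch assumes the `llama-stack-client` CLI (`pip install llama-stack-client`); exact flags may vary by client version:

```bash
# Configure the client against the local server, then list models
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```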
### Via Conda
```bash
llama stack build --template fireworks --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env FIREWORKS_API_KEY=$FIREWORKS_API_KEY
```
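With the server running (via either method), you can try a test inference. The sketch below again assumes the `llama-stack-client` CLI; the model can be addressed by its Fireworks ID or by a `meta-llama/...` alias from the model list above:

```bash
# Send a single chat-completion request to the local server
llama-stack-client inference chat-completion \
  --model-id meta-llama/Llama-3.1-8B-Instruct \
  --message "Write a haiku about llamas."
```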