---
orphan: true
---

<!-- This file was auto-generated by distro_codegen.py, please edit source -->

# Groq Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-groq` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::groq` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime` |
| vector_io | `inline::faiss` |
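
Once a server built from this template is running (see "Running Llama Stack with Groq" below), you can inspect the active providers from the client. A minimal sketch, assuming the `llama-stack-client` CLI is installed (`pip install llama-stack-client`) and the server listens on the default port:

```bash
# Point the CLI at the local server, then list the providers it is running with.
llama-stack-client configure --endpoint http://localhost:8321
llama-stack-client providers list
```
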
### Environment Variables

The following environment variables can be configured:

- `LLAMASTACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `GROQ_API_KEY`: Groq API Key (default: ``)
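
`GROQ_API_KEY` has no usable default, so it is typically exported in the shell that launches the server and forwarded with `--env`, as in the commands further below. For example (placeholder value):

```bash
# Placeholder value; substitute your real key before starting the server.
export GROQ_API_KEY=<your-groq-api-key>
```
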
### Models

The following models are available by default:

- `groq/llama3-8b-8192 (aliases: meta-llama/Llama-3.1-8B-Instruct)`
- `groq/llama-3.1-8b-instant`
- `groq/llama3-70b-8192 (aliases: meta-llama/Llama-3-70B-Instruct)`
- `groq/llama-3.3-70b-versatile (aliases: meta-llama/Llama-3.3-70B-Instruct)`
- `groq/llama-3.2-3b-preview (aliases: meta-llama/Llama-3.2-3B-Instruct)`
- `groq/llama-4-scout-17b-16e-instruct (aliases: meta-llama/Llama-4-Scout-17B-16E-Instruct)`
- `groq/meta-llama/llama-4-scout-17b-16e-instruct (aliases: meta-llama/Llama-4-Scout-17B-16E-Instruct)`
- `groq/llama-4-maverick-17b-128e-instruct (aliases: meta-llama/Llama-4-Maverick-17B-128E-Instruct)`
- `groq/meta-llama/llama-4-maverick-17b-128e-instruct (aliases: meta-llama/Llama-4-Maverick-17B-128E-Instruct)`
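
To confirm which of these models were registered by a running server, you can list them from the client (with the CLI configured as in the earlier sketch):

```bash
# Lists the model identifiers and aliases the stack currently serves.
llama-stack-client models list
```
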
### Prerequisite: API Keys

Make sure you have access to a Groq API Key. You can get one by visiting [Groq](https://api.groq.com/).

## Running Llama Stack with Groq

You can do this via Conda (build the distribution from source) or via Docker, which has a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-groq \
  --port $LLAMA_STACK_PORT \
  --env GROQ_API_KEY=$GROQ_API_KEY
```
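
Once the container is up, a quick sanity check is to hit the server's health route. This is a sketch: the `/v1/health` path is an assumption based on recent Llama Stack releases, so adjust it if your version differs.

```bash
# Expect a small JSON payload indicating the server status.
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```
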
### Via Conda

```bash
llama stack build --template groq --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env GROQ_API_KEY=$GROQ_API_KEY
```
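
Either way, once the server is up you can send a test request through the Groq inference provider. A hedged sketch using the `llama-stack-client` CLI; the exact flags (notably `--model-id`) vary between client releases, so check `llama-stack-client inference chat-completion --help` first:

```bash
# Sends one chat message to one of the Groq-served models listed above.
llama-stack-client inference chat-completion \
  --model-id groq/llama-3.3-70b-versatile \
  --message "Hello! Which model are you?"
```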