---
orphan: true
---

# Groq Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-groq` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::groq` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime` |
| vector_io | `inline::faiss` |

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `GROQ_API_KEY`: Groq API Key (default: ``)
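
For example, you might export both variables in your shell before starting the server (a minimal sketch; the key value below is a placeholder):

```bash
# Port the Llama Stack server will listen on
export LLAMA_STACK_PORT=5001
# Your Groq API key (placeholder shown; substitute your own)
export GROQ_API_KEY=gsk_your_key_here
```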

### Models

The following models are available by default:

- `meta-llama/Llama-3.1-8B-Instruct` (`groq/llama3-8b-8192`)
- `meta-llama/Llama-3.1-8B-Instruct` (`groq/llama-3.1-8b-instant`)
- `meta-llama/Llama-3-70B-Instruct` (`groq/llama3-70b-8192`)
- `meta-llama/Llama-3.3-70B-Instruct` (`groq/llama-3.3-70b-versatile`)
- `meta-llama/Llama-3.2-3B-Instruct` (`groq/llama-3.2-3b-preview`)
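
Once the server is running, you can target a model by either identifier. As a sketch, assuming you have the `llama-stack-client` CLI installed and configured against your server (and that your client version supports the `--model-id` flag), a test request might look like:

```bash
# Send a single chat message to the Groq-backed model
# (the model ID here is the provider alias from the list above)
llama-stack-client inference chat-completion \
  --model-id groq/llama-3.3-70b-versatile \
  --message "Hello, which model are you?"
```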

### Prerequisite: API Keys

Make sure you have access to a Groq API key. You can get one from the Groq console.

## Running Llama Stack with Groq

You can do this via Conda (building the distribution from source) or Docker (which has a pre-built image).

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-groq \
  --port $LLAMA_STACK_PORT \
  --env GROQ_API_KEY=$GROQ_API_KEY
```
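
Once the container is up, a quick smoke test (assuming your Llama Stack version exposes the standard health route) is:

```bash
# Should return a small JSON status payload if the server is healthy
curl http://localhost:$LLAMA_STACK_PORT/v1/health
```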

### Via Conda

```bash
llama stack build --template groq --image-type conda
llama stack run ./run.yaml \
  --port $LLAMA_STACK_PORT \
  --env GROQ_API_KEY=$GROQ_API_KEY
```
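
Either way, you can verify the distribution from another terminal with the `llama-stack-client` CLI (a sketch, assuming the client is installed via pip):

```bash
# Install the client CLI, point it at the local server,
# then list the models the Groq distribution registered
pip install llama-stack-client
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```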