
---
orphan: true
---

# Meta Reference GPU Distribution


The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations:

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `inline::meta-reference` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

Note that you need access to NVIDIA GPUs to run this distribution. It is not compatible with CPU-only machines or machines with AMD GPUs.
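Before starting, you can confirm that the GPUs are visible on the host. This is a minimal check and assumes the NVIDIA drivers are already installed:

```bash
# Should list the available NVIDIA GPUs; if this fails, the distribution will not start.
nvidia-smi
```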

## Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
- `SAFETY_CHECKPOINT_DIR`: Directory containing the Llama-Guard model checkpoint (default: `null`)
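For example, when running outside of Docker you can export these variables before starting the server; the values below are simply the documented defaults shown above:

```bash
# Illustrative only: override the defaults before launching the server.
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct
export SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```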

## Prerequisite: Downloading Models

Please check that you have Llama model checkpoints downloaded in `~/.llama` before proceeding. See the installation guide for how to download the models using the Hugging Face CLI.
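As one hedged example, you can fetch a checkpoint with the Hugging Face CLI; the exact target directory layout under `~/.llama` is an assumption here, so follow the installation guide if it differs:

```bash
# Requires `huggingface-cli login` (or HF_TOKEN) with access to the gated meta-llama repos.
# The checkpoint directory name below is an assumed layout, not a guaranteed convention.
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct \
  --local-dir ~/.llama/checkpoints/Llama3.2-3B-Instruct
```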


## Running the Distribution

You can run the distribution either via Docker, which has a pre-built image, or via a local venv.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  -e SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```
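Once the container is running, you can verify that the server is responding. This is a minimal sketch that assumes the server exposes its standard health endpoint on the published port:

```bash
# Expect an OK/healthy response once startup (including model loading) has finished.
curl http://localhost:8321/v1/health
```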

### Via venv

Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.

```bash
llama stack build --distro meta-reference-gpu --image-type venv
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/run.yaml \
  --port 8321
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 8321
```
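After the server is up, a quick way to exercise it is with the `llama-stack-client` CLI; this sketch assumes the client is installed in the same environment (for example via `uv pip install llama-stack-client`):

```bash
# Point the client at the local server, then list the registered models.
llama-stack-client configure --endpoint http://localhost:8321
llama-stack-client models list
```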