
# remote::runpod

## Description

RunPod inference provider for running models on RunPod's cloud GPU platform.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str \| None` | No | | The URL for the RunPod model serving endpoint |
| `api_token` | `str \| None` | No | | The API token |
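
For orientation, here is a hypothetical excerpt of a stack run config showing where a `remote::runpod` provider and these two fields are declared. The `runpod` provider id is an arbitrary label chosen here, not something mandated by this page, and the `config` block matches the sample configuration below.

```yaml
# Hypothetical run.yaml excerpt; provider_id is an arbitrary label you choose.
providers:
  inference:
  - provider_id: runpod
    provider_type: remote::runpod
    config:
      url: ${env.RUNPOD_URL:=}
      api_token: ${env.RUNPOD_API_TOKEN}
```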

## Sample Configuration

```yaml
url: ${env.RUNPOD_URL:=}
api_token: ${env.RUNPOD_API_TOKEN}
```
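
The `${env.VAR}` references are resolved when the run config is loaded, and the `:=` form supplies a default (empty here) when the variable is unset. As a sketch only, the same fields can instead be given literal values; the endpoint URL and token below are placeholders, not real credentials.

```yaml
# Placeholder literal values in place of the environment substitutions above.
url: https://<your-runpod-endpoint>
api_token: <your-runpod-api-token>
```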