# remote::runpod
## Description
RunPod inference provider for running models on RunPod's cloud GPU platform.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str \| None` | No | | The URL for the RunPod model serving endpoint |
| `api_token` | `str \| None` | No | | The API token |
## Sample Configuration
```yaml
url: ${env.RUNPOD_URL:=}
api_token: ${env.RUNPOD_API_TOKEN}
```
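
The `${env.VAR}` references above are resolved from environment variables when the stack starts; `${env.RUNPOD_URL:=}` falls back to an empty value if the variable is unset. As a rough sketch (provider ID and surrounding layout are illustrative and depend on your distribution's `run.yaml`), the provider entry might be wired into the inference API like this:

```yaml
# Hypothetical run.yaml excerpt: registering the RunPod remote inference
# provider. The provider_id and file structure here are illustrative.
providers:
  inference:
    - provider_id: runpod
      provider_type: remote::runpod
      config:
        url: ${env.RUNPOD_URL:=}
        api_token: ${env.RUNPOD_API_TOKEN}
```

Export `RUNPOD_URL` and `RUNPOD_API_TOKEN` in your environment before starting the stack so the substitutions resolve to your endpoint and credentials.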