# remote::runpod

## Description
RunPod inference provider for running models on RunPod's cloud GPU platform.
## Configuration
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `str \| None` | No | | The URL for the Runpod model serving endpoint |
| `api_token` | `str \| None` | No | | The API token |
## Sample Configuration

```yaml
url: ${env.RUNPOD_URL:=}
api_token: ${env.RUNPOD_API_TOKEN}
```
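To show where this configuration lives in practice, here is a sketch of how the provider might appear in a Llama Stack `run.yaml` under the inference providers; the `provider_id` value is an arbitrary label chosen for illustration, and the exact surrounding structure may differ depending on your distribution template:

```yaml
providers:
  inference:
    # Hypothetical entry wiring up the RunPod provider; the config block
    # mirrors the sample configuration above, with values pulled from
    # environment variables at startup.
    - provider_id: runpod
      provider_type: remote::runpod
      config:
        url: ${env.RUNPOD_URL:=}
        api_token: ${env.RUNPOD_API_TOKEN}
```

The `${env.VAR}` form reads the variable from the environment, and `${env.VAR:=}` falls back to an empty default when the variable is unset.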