Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-06 10:37:22 +00:00
# What does this PR do?

Previously, the runpod provider would fail if `RUNPOD_API_TOKEN` was not set. This PR modifies the impl to default to an empty string, aligning with similar providers' behavior.

Closes #4296

## Test Plan

Run `uv run llama stack run --providers inference=remote::runpod` with `RUNPOD_API_TOKEN` unset - the server now boots where it previously crashed:

```
INFO 2025-12-04 13:52:59,920 uvicorn.error:84 uncategorized: Started server process [233656]
INFO 2025-12-04 13:52:59,921 uvicorn.error:48 uncategorized: Waiting for application startup.
INFO 2025-12-04 13:52:59,926 llama_stack.core.server.server:168 core::server: Starting up Llama Stack server (version: 0.4.0.dev0)
INFO 2025-12-04 13:52:59,927 llama_stack.core.stack:495 core: starting registry refresh task
INFO 2025-12-04 13:52:59,928 uvicorn.error:62 uncategorized: Application startup complete.
INFO 2025-12-04 13:52:59,929 uvicorn.error:216 uncategorized: Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
```

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
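As a rough illustration of the fix described above, the pattern is to read the environment variable with an empty-string default rather than a required lookup that raises at startup. The function and variable names below are hypothetical, not the actual llama-stack code:

```python
import os


def get_runpod_api_token() -> str:
    """Return the RunPod API token, defaulting to an empty string.

    Before the fix, a required lookup like os.environ["RUNPOD_API_TOKEN"]
    would raise KeyError when the variable was unset, crashing the server
    at startup. Defaulting to "" lets the provider initialize; any auth
    failure then surfaces only if the provider is actually used.
    """
    return os.environ.get("RUNPOD_API_TOKEN", "")
```

The trade-off is that a misconfigured token is reported lazily (on first request) instead of eagerly at boot, which matches how similar remote inference providers behave.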