# What does this PR do?

Previously, the RunPod provider would fail to start if `RUNPOD_API_TOKEN` was not set. This change modifies the implementation to default the token to an empty string, aligning with the behavior of similar remote providers. (A sketch of the pattern appears after the test plan below.)

Closes #4296

## Test Plan

Run `uv run llama stack run --providers inference=remote::runpod` with `RUNPOD_API_TOKEN` unset. The server now boots where it previously crashed:

```
INFO 2025-12-04 13:52:59,920 uvicorn.error:84 uncategorized: Started server process [233656]
INFO 2025-12-04 13:52:59,921 uvicorn.error:48 uncategorized: Waiting for application startup.
INFO 2025-12-04 13:52:59,926 llama_stack.core.server.server:168 core::server: Starting up Llama Stack server (version: 0.4.0.dev0)
INFO 2025-12-04 13:52:59,927 llama_stack.core.stack:495 core: starting registry refresh task
INFO 2025-12-04 13:52:59,928 uvicorn.error:62 uncategorized: Application startup complete.
INFO 2025-12-04 13:52:59,929 uvicorn.error:216 uncategorized: Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
```

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
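For reference, a minimal sketch of the fix pattern, assuming a Pydantic-based provider config as llama-stack uses; the class and field names here are illustrative, not the actual RunPod config:

```python
# Hypothetical sketch -- not the actual llama-stack RunPod config class.
import os

from pydantic import BaseModel, Field


class RunpodImplConfig(BaseModel):
    # Before: a missing RUNPOD_API_TOKEN caused a failure at startup.
    # After: default to an empty string so the server can boot without
    # the token, matching how similar remote providers behave.
    api_token: str = Field(
        default_factory=lambda: os.getenv("RUNPOD_API_TOKEN", ""),
        description="RunPod API token; may be empty if not configured.",
    )
```

With this default, instantiating the config without the environment variable succeeds, and any authentication error is deferred until a request is actually made to RunPod.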