Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-03 18:00:36 +00:00.
# What does this PR do?

Allows the template distribution to connect to a hosted or local NIM:

- use `--env NVIDIA_BASE_URL=http://localhost:8000` to connect to a local NIM running at localhost:8000
- use `--env NVIDIA_API_KEY=blah` when connecting to a hosted NIM, e.g. `NVIDIA_BASE_URL=https://integrate.api.nvidia.com`

## Test Plan

- `llama stack run ./llama_stack/templates/nvidia/run.yaml` -> error, e.g. API key is required for hosted NVIDIA NIM
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_BASE_URL=https://integrate.api.nvidia.com` -> error, e.g. API key is required for hosted NVIDIA NIM
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_API_KEY=REDACTED` -> successful connection to NIM on https://integrate.api.nvidia.com
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_BASE_URL=https://integrate.api.nvidia.com --env NVIDIA_API_KEY=REDACTED` -> successful connection to NIM running on integrate.api.nvidia.com
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_BASE_URL=http://localhost:8000` -> successful connection to NIM running on localhost:8000
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_BASE_URL=http://localhost:8000 --env NVIDIA_API_KEY=REDACTED` -> successful connection to NIM running on http://localhost:8000
- `llama stack run ./llama_stack/templates/nvidia/run.yaml --env NVIDIA_BASE_URL=http://bogus` -> runtime error, e.g. ConnectionError (TODO: this should be a startup error)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
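The Test Plan above exercises one rule: a hosted NIM (integrate.api.nvidia.com) requires `NVIDIA_API_KEY`, while a local NIM does not, and either way `NVIDIA_BASE_URL` selects the endpoint. The sketch below is a minimal illustration of that rule only; the function and constant names (`resolve_nvidia_config`, `DEFAULT_BASE_URL`) are hypothetical and not the actual llama-stack adapter code.

```python
# Hypothetical sketch of the env-var behavior described in the Test Plan;
# not the real llama-stack implementation.
DEFAULT_BASE_URL = "https://integrate.api.nvidia.com"  # hosted NIM default

def resolve_nvidia_config(env: dict) -> dict:
    """Resolve base URL and API key, rejecting hosted NIM without a key."""
    base_url = env.get("NVIDIA_BASE_URL", DEFAULT_BASE_URL)
    api_key = env.get("NVIDIA_API_KEY")
    # Hosted NIM is reachable only with an API key; a local NIM
    # (e.g. http://localhost:8000) needs none.
    if "integrate.api.nvidia.com" in base_url and not api_key:
        raise ValueError("API key is required for hosted NVIDIA NIM")
    return {"base_url": base_url, "api_key": api_key}
```

Under this sketch, `resolve_nvidia_config({})` fails at startup (matching the first two Test Plan cases), while passing `NVIDIA_BASE_URL=http://localhost:8000` succeeds without a key.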