llama-stack/llama_stack/templates
Yuan Tang dce9a24a6c
test: Add default vLLM URL in remote-vllm template (#1736)
# What does this PR do?

This is to avoid errors like the following when running inference
integration tests:

```
ERROR tests/integration/inference/test_text_inference.py::test_text_completion_stop_sequence[txt=8B-inference:completion:stop_sequence] - llama_stack.distribution.stack.EnvVarError: Environment variable 'VLLM_URL' not set or empty at providers.inference[0].config.url
```
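The `EnvVarError` above comes from the stack's env-var substitution when a referenced variable has no value and no default. The real implementation lives in `llama_stack.distribution.stack`; the following is only a minimal sketch of how `${env.NAME:default}`-style substitution could work (the function name, regex, and error message format here are illustrative, not the actual API):

```python
import os
import re


class EnvVarError(Exception):
    """Raised when a referenced environment variable is unset and has no default."""


# Matches ${env.NAME} or ${env.NAME:default} (illustrative pattern)
_ENV_PATTERN = re.compile(r"\$\{env\.(?P<name>[A-Z0-9_]+)(?::(?P<default>[^}]*))?\}")


def resolve_env_vars(value: str) -> str:
    """Replace ${env.NAME} references with environment values or their defaults."""

    def _sub(match: re.Match) -> str:
        name = match.group("name")
        default = match.group("default")
        val = os.environ.get(name)
        if val:
            return val
        if default is not None:
            # A default in the template means tests can run with nothing exported
            return default
        raise EnvVarError(f"Environment variable '{name}' not set or empty")

    return _ENV_PATTERN.sub(_sub, value)
```

With a default present in the template, `resolve_env_vars("${env.VLLM_URL:http://localhost:8000/v1}")` succeeds even when `VLLM_URL` is unexported, which is exactly the failure mode this PR removes.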

It's also a good idea to have a default, and this one is consistent with the vLLM API server's own default address.
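For illustration, the inference provider block in the remote-vllm template's run config might look like this after the change (a sketch only; the exact default URL and the env-substitution syntax should be checked against the actual `remote-vllm` run.yaml):

```yaml
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      # Falls back to a local vLLM API server address when VLLM_URL is unset
      url: ${env.VLLM_URL:http://localhost:8000/v1}
```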

## Test Plan

Integration tests can run without the error above.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-21 07:31:59 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| cerebras | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| ci-tests | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| dell | fix: docker run with --pull always to fetch the latest image (#1733) | 2025-03-20 15:35:48 -07:00 |
| dev | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| groq | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| hf-endpoint | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| hf-serverless | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| meta-reference-gpu | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| meta-reference-quantized-gpu | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| nvidia | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| ollama | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| open-benchmark | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| passthrough | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| remote-vllm | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |
| sambanova | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| tgi | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| together | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| vllm-gpu | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |