llama-stack-mirror/llama_stack/templates/remote-vllm
Yuan Tang dce9a24a6c
test: Add default vLLM URL in remote-vllm template (#1736)
# What does this PR do?

This is to avoid errors like the following when running inference
integration tests:

```
ERROR tests/integration/inference/test_text_inference.py::test_text_completion_stop_sequence[txt=8B-inference:completion:stop_sequence] - llama_stack.distribution.stack.EnvVarError: Environment variable 'VLLM_URL' not set or empty at providers.inference[0].config.url
```

It's also good to have a default, and the chosen value is consistent with the vLLM API
server's own default.
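
For illustration, here is a minimal Python sketch of the idea (not the actual template code): resolve `VLLM_URL` with a fallback so the provider config stays valid when the variable is unset or empty. The `DEFAULT_VLLM_URL` value and the `resolve_vllm_url` helper are assumptions made for this sketch, not names taken from the repository.

```python
import os

# Assumed default, mirroring the port the vLLM OpenAI-compatible API server
# listens on by default (8000); check the template for the actual value.
DEFAULT_VLLM_URL = "http://localhost:8000/v1"


def resolve_vllm_url() -> str:
    """Return VLLM_URL from the environment, falling back to a default.

    Previously an unset or empty VLLM_URL raised EnvVarError during stack
    startup; with a default, inference integration tests can run without
    exporting the variable.
    """
    return os.environ.get("VLLM_URL") or DEFAULT_VLLM_URL


if __name__ == "__main__":
    print(resolve_vllm_url())
```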

## Test Plan

Integration tests can run without the error above.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-21 07:31:59 -07:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| build.yaml | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| doc_template.md | fix: Default to port 8321 everywhere (#1734) | 2025-03-20 15:50:41 -07:00 |
| run-with-safety.yaml | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |
| run.yaml | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |
| vllm.py | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |