llama-stack/llama_stack
Commit dce9a24a6c by Yuan Tang
test: Add default vLLM URL in remote-vllm template (#1736)
# What does this PR do?

This avoids errors like the following when running the inference
integration tests:

```
ERROR tests/integration/inference/test_text_inference.py::test_text_completion_stop_sequence[txt=8B-inference:completion:stop_sequence] - llama_stack.distribution.stack.EnvVarError: Environment variable 'VLLM_URL' not set or empty at providers.inference[0].config.url
```

It is also good to have a default that is consistent with the vLLM API
server's default address.
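
For illustration, here is a minimal Python sketch of the intended fallback behavior, assuming the vLLM OpenAI-compatible server's usual default address (`http://localhost:8000/v1`); the template itself supplies the default through the provider's `config.url` field rather than through code like this.

```python
import os

# Minimal sketch, not the template implementation: use VLLM_URL when it is
# set and non-empty, otherwise fall back to an assumed vLLM server default.
DEFAULT_VLLM_URL = "http://localhost:8000/v1"  # assumed default, for illustration

vllm_url = os.environ.get("VLLM_URL") or DEFAULT_VLLM_URL
print(f"Using vLLM endpoint: {vllm_url}")
```

With a default like this in place, `VLLM_URL` only needs to be exported when the server is reachable at a non-default address.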

## Test Plan

The integration tests now run without the error above.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Committed: 2025-03-21 07:31:59 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `apis` | chore: mypy violations cleanup for inline::{telemetry,tool_runtime,vector_io} (#1711) | 2025-03-20 10:01:10 -07:00 |
| `cli` | refactor: simplify command execution and remove PTY handling (#1641) | 2025-03-17 15:03:14 -07:00 |
| `distribution` | feat: make sure agent sessions are under access control (#1737) | 2025-03-21 07:31:16 -07:00 |
| `models/llama` | fix: update default tool call system prompt (#1712) | 2025-03-19 22:49:24 -07:00 |
| `providers` | feat: make sure agent sessions are under access control (#1737) | 2025-03-21 07:31:16 -07:00 |
| `strong_typing` | fix: Support types.UnionType in schemas (#1721) | 2025-03-20 09:54:02 -07:00 |
| `templates` | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | feat: add support for logging config in the run.yaml (#1408) | 2025-03-14 12:36:25 -07:00 |
| `schema_utils.py` | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |