llama-stack-mirror/llama_stack/templates
Commit b8f7e1504d by grs
feat: allow the interface on which the server will listen to be configured (#2015)
# What does this PR do?

It may not always be desirable to listen on all interfaces, which is the
default. For example, by listening only on a loopback interface, the
server cannot be reached from outside the host it runs on. This PR makes
the listen interface configurable through a CLI option, an environment
variable, or an entry in the config file.
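
As a rough illustration of how such a setting could be resolved, here is a
minimal sketch assuming one plausible precedence (CLI option over environment
variable over config file). The flag name `--listen`, the env var
`LLAMA_STACK_LISTEN`, the config key `server.host`, and the port are
assumptions made for this example, not names taken from the PR; only the
`uvicorn.run(..., host=..., port=...)` call is a real API.

```python
# Hypothetical sketch: resolve the listen interface from CLI > env > config.
# The names --listen, LLAMA_STACK_LISTEN and server.host are illustrative
# assumptions, not necessarily what llama-stack actually uses.
import argparse
import os

import uvicorn
import yaml  # PyYAML
from fastapi import FastAPI

app = FastAPI()


def resolve_host(cli_host: str | None, config_path: str) -> str:
    # CLI option takes precedence, then the environment variable,
    # then the config file, then the historical default (all interfaces).
    if cli_host:
        return cli_host
    env_host = os.environ.get("LLAMA_STACK_LISTEN")  # assumed env var name
    if env_host:
        return env_host
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}
    return config.get("server", {}).get("host", "0.0.0.0")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--listen", default=None)        # assumed flag name
    parser.add_argument("--config", default="run.yaml")  # assumed config path
    args = parser.parse_args()

    host = resolve_host(args.listen, args.config)
    # Binding to 127.0.0.1 keeps the server reachable only from the local host.
    uvicorn.run(app, host=host, port=8321)
```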

## Test Plan

I ran a server with and without the added CLI argument to verify that the
argument is used when provided, and that the default behaviour is unchanged
when it is not.

Signed-off-by: Gordon Sim <gsim@redhat.com>
2025-05-16 12:59:31 -07:00
| Name | Last commit | Last commit date |
|------|-------------|------------------|
| bedrock | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| cerebras | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| ci-tests | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| dell | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| groq | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| hf-endpoint | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| hf-serverless | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| llama_api | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| meta-reference-gpu | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| nvidia | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| ollama | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| open-benchmark | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| passthrough | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| remote-vllm | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| sambanova | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| starter | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| tgi | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| together | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| verification | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| vllm-gpu | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| watsonx | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| dependencies.json | refactor: rename dev distro as starter (#2181) | 2025-05-15 12:52:34 -07:00 |
| template.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |