llama-stack/llama_stack/templates/nvidia
grs b8f7e1504d
feat: allow the interface on which the server will listen to be configured (#2015)
# What does this PR do?

It may not always be desirable to listen on all interfaces, which is the
default. For example, by listening only on a loopback interface, the
server cannot be reached from outside the host it runs on. This PR makes
the listen interface configurable through a CLI option, an environment
variable, or an entry in the config file.
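The precedence this describes (CLI option first, then environment variable, then config file, then the old default) can be sketched as below. The names `--listen`, `LLAMA_STACK_HOST`, and `server.host` are illustrative assumptions, not the actual identifiers introduced by this PR.

```python
# Minimal sketch of the resolution order described above.
# All option, variable, and config-key names here are hypothetical.
import argparse
import os

DEFAULT_HOST = "0.0.0.0"  # listen on all interfaces, the previous default


def resolve_host(cli_host: str | None, config: dict) -> str:
    """Pick the listen address: CLI flag, then env var, then config entry, then default."""
    if cli_host:
        return cli_host
    env_host = os.environ.get("LLAMA_STACK_HOST")
    if env_host:
        return env_host
    return config.get("server", {}).get("host", DEFAULT_HOST)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--listen", default=None, help="interface/address to bind to")
    args = parser.parse_args()

    config = {"server": {"host": "127.0.0.1"}}  # stand-in for a parsed run.yaml
    print(f"server would bind to {resolve_host(args.listen, config)}")
```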

## Test Plan

I ran the server with and without the added CLI argument to verify that
the argument is used when provided and that the default behavior is
unchanged when it is not.
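A check of the kind the test plan describes could look like the following sketch: confirm whether the server accepts TCP connections on a given address and port. The port and the external address are placeholders, not values taken from the PR.

```python
# Hypothetical connectivity check: with the server bound to 127.0.0.1,
# only the loopback probe should succeed.
import socket


def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("loopback:", can_connect("127.0.0.1", 8321))      # placeholder port
    print("external:", can_connect("192.0.2.10", 8321))     # example address, expected to fail
```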

Signed-off-by: Gordon Sim <gsim@redhat.com>
2025-05-16 12:59:31 -07:00
| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | add nvidia distribution (#565) | 2025-01-15 14:04:43 -08:00 |
| build.yaml | feat: Add NVIDIA NeMo datastore (#1852) | 2025-04-28 09:41:59 -07:00 |
| doc_template.md | feat: Update NVIDIA to GA docs; remove notebook reference until ready (#1999) | 2025-04-18 19:13:18 -04:00 |
| nvidia.py | feat: Add NVIDIA NeMo datastore (#1852) | 2025-04-28 09:41:59 -07:00 |
| run-with-safety.yaml | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |
| run.yaml | feat: allow the interface on which the server will listen to be configured (#2015) | 2025-05-16 12:59:31 -07:00 |