llama-stack/llama_stack
Ashwin Bharambe f8f2f7f9bb
feat: Add HTTPS serving option (#1000)
# What does this PR do?

Enables HTTPS option for Llama Stack. 

While doing so, introduces a `ServerConfig` sub-structure to house all
server-related configuration (port, SSL, etc.)
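
A minimal sketch of what such a sub-structure could look like (the field names here are illustrative guesses based on the flags below, not the actual llama-stack definitions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerConfig:
    # Hypothetical fields mirroring the CLI flags used in the test plan:
    # --port, --tls-keyfile, --tls-certfile
    port: int = 8321
    tls_keyfile: Optional[str] = None
    tls_certfile: Optional[str] = None

    @property
    def tls_enabled(self) -> bool:
        # HTTPS serving requires both a key and a certificate
        return self.tls_keyfile is not None and self.tls_certfile is not None
```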

Also simplifies the `start_container.sh` entrypoint to a plain `python`
invocation rather than a complex bash command line.

## Test Plan

Conda: 

Run:
```bash
$ llama stack build --template together
$ llama stack run --port 8322        # ensure the server starts

$ llama-stack-client configure --endpoint http://localhost:8322
$ llama-stack-client models list
```

Create a self-signed SSL key / cert pair. Then, using a local checkout
of `llama-stack-client-python`, change
https://github.com/meta-llama/llama-stack-client-python/blob/main/src/llama_stack_client/_base_client.py#L759
to add `kwargs.setdefault("verify", False)` so SSL verification is
disabled. Then:

```bash
$ llama stack run --port 8322 --tls-keyfile <KEYFILE> --tls-certfile <CERTFILE>
$ llama-stack-client configure --endpoint https://localhost:8322  # notice the `https`
$ llama-stack-client models list
```
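
For reference, one way to generate the self-signed key / cert pair used above (file names are illustrative):

```bash
# Generate a self-signed certificate for localhost, valid for a year,
# with an unencrypted key (-nodes) so the server can read it without a passphrase
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=localhost"
```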

Also tested with containers (but of course one needs to make sure the
cert and key files are appropriately provided to the container).
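
The `verify=False` tweak above disables certificate verification in the client's HTTP layer. As a sketch, with only the standard library the equivalent TLS setting looks like this (appropriate only for local testing against a self-signed cert, never for production):

```python
import ssl

# Build a TLS context that accepts a self-signed certificate.
# check_hostname must be disabled before verify_mode can be set to CERT_NONE.
ctx = ssl.create_default_context()
ctx.check_hostname = False          # do not require the cert CN to match the host
ctx.verify_mode = ssl.CERT_NONE     # skip certificate-chain verification
```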
2025-02-07 09:39:08 -08:00