docs: add addn server guidance for Linux users in Quick Start
Co-authored-by: Mert Parker <mertpaker@gmail.com>
Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
This commit is contained in:
parent 7392daddee
commit 56373cc686

1 changed file with 17 additions and 0 deletions

@@ -58,6 +58,23 @@ docker run -it \
```

Configuration for this is available at `distributions/ollama/run.yaml`.

:::{admonition} Note
:class: note

Docker containers run in their own isolated network namespaces on Linux. To allow the container to communicate with services running on the host via `localhost`, you need `--network=host`. This makes the container use the host’s network directly so it can connect to Ollama running on `localhost:11434`.

Linux users having issues running the above command should instead try the following:
```bash
docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  --network=host \
  llamastack/distribution-ollama \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://localhost:11434
```
:::
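
To confirm that Ollama is reachable from the host before starting the container, a quick check against its default port (a sketch, assuming a stock Ollama install serving on 11434) is:

```bash
# Query Ollama's model-list endpoint; any JSON response confirms the
# server is listening on the host's default port.
curl http://localhost:11434/api/tags
```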
|
||||||
|
|
||||||
|
|
||||||
|
|
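
The commands above assume `LLAMA_STACK_PORT` and `INFERENCE_MODEL` are already exported in the shell; a minimal sketch with illustrative values:

```bash
# Illustrative values only; substitute the port and model you actually use.
export LLAMA_STACK_PORT=8321
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
```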