remote vllm distro

Ashwin Bharambe 2024-11-08 11:32:06 -08:00
parent ba82021d4b
commit 02c66b49fc
13 changed files with 188 additions and 18 deletions


@@ -7,16 +7,22 @@ The `llamastack/distribution-ollama` distribution consists of the following prov
| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chroma | remote::ollama | meta-reference |
### Docker: Start a Distribution (Single Node GPU)
### Docker: Start the Distribution (Single Node, regular desktop machine)
> [!NOTE]
> This will start an Ollama server with CPU only; please see the [Ollama documentation](https://github.com/ollama/ollama) for details on serving models on CPU.
```
$ cd distributions/ollama; docker compose up
```
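Once the containers are up, it can help to confirm that the Ollama server is reachable before pointing clients at the stack. A minimal check (an illustrative sketch, assuming the compose file publishes Ollama's default port 11434 on the host):
```
# Lists the models the running Ollama server currently has available
$ curl http://localhost:11434/api/tags
```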
### Docker: Start the Distribution (Single Node with NVIDIA GPUs)
> [!NOTE]
> This assumes you have access to a machine with an NVIDIA GPU so that the Ollama server can use it.
```
$ cd distributions/ollama/gpu
$ ls
compose.yaml run.yaml
$ docker compose up
$ cd distributions/ollama-gpu; docker compose up
```
You will see output similar to the following ---
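Before relying on the GPU compose file, it can also help to verify that the GPU and the NVIDIA container runtime are visible to Docker. A quick check, assuming the NVIDIA Container Toolkit is installed on the host:
```
# Should list the host GPU(s)
$ nvidia-smi
# Should mention the nvidia runtime if the NVIDIA Container Toolkit is configured
$ docker info | grep -i nvidia
```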
@@ -38,18 +44,6 @@ To kill the server
docker compose down
```
### Docker: Start the Distribution (Single Node CPU)
> [!NOTE]
> This will start an Ollama server with CPU only; please see the [Ollama documentation](https://github.com/ollama/ollama) for details on serving models on CPU.
```
$ cd distributions/ollama/cpu
$ ls
compose.yaml run.yaml
$ docker compose up
```
### Conda: ollama run + llama stack run
If you wish to separately spin up an Ollama server and connect to it with Llama Stack, you may use the following commands.
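As a rough sketch of that workflow (the model tag, config path, and port below are illustrative assumptions, not the document's exact commands):
```
# Terminal 1: start Ollama and load a model (illustrative model tag)
$ ollama run llama3.1:8b-instruct-fp16

# Terminal 2: start the Llama Stack server against a run.yaml whose
# inference provider points at the local Ollama server
$ llama stack run ./run.yaml --port 5000
```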


@@ -144,7 +144,11 @@ docker compose down
:::{tab-item} ollama
```
$ cd llama-stack/distributions/ollama/cpu && docker compose up
$ cd llama-stack/distributions/ollama && docker compose up
# OR
$ cd llama-stack/distributions/ollama-gpu && docker compose up
```
You will see output similar to the following ---
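To confirm that the containers came up cleanly in either case, the standard Compose status and log commands apply, run from whichever distribution directory was used:
```
$ docker compose ps        # the services should be in the "running" state
$ docker compose logs -f   # follow the ollama and llama stack logs
```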