Distributions updates (slight updates to ollama, add inline-vllm and remote-vllm) (#408)

* remote vllm distro

* add inline-vllm details, fix things

* Write some docs
Ashwin Bharambe, 2024-11-08
19 changed files with 365 additions and 46 deletions


@@ -2,25 +2,35 @@
The `llamastack/distribution-ollama` distribution consists of the following provider configurations.
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- |
| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference |
## Using Docker Compose
You can use `docker compose` to start an Ollama server and connect it with the Llama Stack server in a single command.
### Docker: Start the Distribution (Single Node, regular desktop machine)
> [!NOTE]
> This will start an Ollama server in CPU-only mode. Please see the [Ollama Documentation](https://github.com/ollama/ollama) for details on serving models on CPU.
```bash
$ cd distributions/ollama; docker compose up
```
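Once the containers are up, you can sanity-check the Ollama server directly (a quick sketch; this assumes Ollama's default port 11434 is exposed by the compose file):

```bash
# Hit the same /api/ps endpoint that shows up in the server logs below;
# it lists the models currently loaded by Ollama.
curl http://localhost:11434/api/ps
```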
### Docker: Start a Distribution (Single Node with nvidia GPUs)
> [!NOTE]
> This assumes you have a GPU available and want the Ollama server to use it.
```bash
$ cd distributions/ollama-gpu; docker compose up
```
You will see output similar to the following:
```bash
[ollama] | [GIN] 2024/10/18 - 21:19:41 | 200 | 226.841µs | ::1 | GET "/api/ps"
[ollama] | [GIN] 2024/10/18 - 21:19:42 | 200 | 60.908µs | ::1 | GET "/api/ps"
INFO: Started server process [1]
@@ -34,36 +44,24 @@ INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
```
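If both services came up cleanly, you can also confirm their status (a generic check; the exact service names come from the compose file):

```bash
# List the compose services and their current state
docker compose ps
```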
To kill the server
```bash
docker compose down
```
## Starting Ollama and Llama Stack separately
If you wish to spin up an Ollama server separately and connect it with Llama Stack, use the following commands.
#### Start Ollama server
- Please check the [Ollama Documentation](https://github.com/ollama/ollama) for more details.
**Via Docker**
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
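With the container running, you can pull a model into it before wiring up Llama Stack (a sketch; assumes the container name `ollama` from the command above and a model tag used later in this guide):

```bash
# Run the ollama CLI inside the running container to download a model
docker exec -it ollama ollama pull llama3.1:8b-instruct-fp16
```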
**Via CLI**
```bash
ollama run <model_id>
```
@@ -71,7 +69,7 @@ ollama run <model_id>
**Via Conda**
```bash
llama stack build --template ollama --image-type conda
llama stack run ./gpu/run.yaml
```
@@ -82,7 +80,7 @@ docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run
```
Make sure the inference provider in your `run.yaml` file points to the correct Ollama endpoint, e.g.:
```yaml
inference:
- provider_id: ollama0
provider_type: remote::ollama
@@ -96,7 +94,7 @@ inference:
You can use `ollama` to manage model downloads.
```bash
ollama pull llama3.1:8b-instruct-fp16
ollama pull llama3.1:70b-instruct-fp16
```
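To confirm the downloads completed (standard `ollama` CLI):

```bash
# List locally available models and their sizes
ollama list
```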
@@ -106,7 +104,7 @@ ollama pull llama3.1:70b-instruct-fp16
To serve a new model with `ollama`
```bash
ollama run <model_name>
```
@@ -119,7 +117,7 @@ llama3.1:8b-instruct-fp16 4aacac419454 17 GB 100% GPU 4 minutes fro
```
To verify that the model served by Ollama is correctly connected to the Llama Stack server
```bash
$ llama-stack-client models list
+----------------------+----------------------+---------------+-----------------------------------------------+
| identifier | llama_model | provider_id | metadata |


@@ -0,0 +1,83 @@
# Remote vLLM Distribution
The `llamastack/distribution-remote-vllm` distribution consists of the following provider configurations.
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- |
| **Provider(s)** | remote::vllm | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference |
You can use this distribution if you have GPUs and want to run an independent vLLM server container for running inference.
## Using Docker Compose
You can use `docker compose` to start a vLLM container and a Llama Stack server container together.
> [!NOTE]
> This assumes you have a GPU available and want the vLLM server to use it.
```bash
$ cd distributions/remote-vllm; docker compose up
```
You will see output similar to the following:
```
<TO BE FILLED>
```
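Once both containers are up, you can verify that the vLLM server is responding (a sketch; assumes vLLM's OpenAI-compatible API on its default port 8000, as in the compose setup):

```bash
# List the model(s) served by vLLM via its OpenAI-compatible API
curl http://localhost:8000/v1/models
```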
To kill the server
```bash
docker compose down
```
## Starting vLLM and Llama Stack separately
You may want to start a vLLM server separately and connect it with Llama Stack manually. There are two ways to run the Llama Stack server against it: via Conda or via Docker.
#### Start the vLLM server
```bash
docker run --runtime nvidia --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=<secret>" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model meta-llama/Llama-3.1-8B-Instruct
```
Please check the [vLLM Documentation](https://docs.vllm.ai/en/v0.5.5/serving/deploying_with_docker.html) for more details.
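Once the server is up, you can send it a test request (a sketch; assumes vLLM's OpenAI-compatible chat endpoint and the model name from the command above):

```bash
# Send a minimal chat completion request to the vLLM server
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```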
#### Start Llama Stack server pointing to your vLLM server
We have provided a template `run.yaml` file in the `distributions/remote-vllm` directory. Please make sure the `url` in the `inference` provider's config points to your vLLM server endpoint. As an example, if your vLLM server is running on `http://127.0.0.1:8000`, your `run.yaml` file should look like the following:
```yaml
inference:
- provider_id: vllm0
provider_type: remote::vllm
config:
url: http://127.0.0.1:8000
```
**Via Conda**
If you are using Conda, you can build and run the Llama Stack server with the following commands:
```bash
cd distributions/remote-vllm
llama stack build --template remote_vllm --image-type conda
llama stack run run.yaml
```
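After the server starts, you can verify the connection using the same CLI check shown in the Ollama guide (assumes the client targets the stack server's default port 5000):

```bash
# List the models the Llama Stack server exposes through the remote vLLM provider
llama-stack-client models list
```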
**Via Docker**
You can use the Llama Stack Docker image to start the server with the following command:
```bash
docker run --network host -it -p 5000:5000 \
-v ~/.llama:/root/.llama \
-v ./run.yaml:/root/llamastack-run-remote-vllm.yaml \
--gpus=all \
llamastack/distribution-remote-vllm \
--yaml_config /root/llamastack-run-remote-vllm.yaml
```


@@ -80,6 +80,11 @@ Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-
:::
:::{tab-item} vLLM
##### System Requirements
Access to Single-Node GPU to start a vLLM server.
:::
:::{tab-item} tgi
##### System Requirements
Access to Single-Node GPU to start a TGI server.
@@ -119,6 +124,22 @@ docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.
```
:::
:::{tab-item} vLLM
```
$ cd llama-stack/distributions/remote-vllm && docker compose up
```
This will first start the vLLM server on port 8000, then start the Llama Stack distribution server connected to it for inference. You should see output like the following:
```
<TO BE FILLED>
```
To kill the server
```
docker compose down
```
:::
:::{tab-item} tgi
```
$ cd llama-stack/distributions/tgi && docker compose up
@@ -144,7 +165,11 @@ docker compose down
:::{tab-item} ollama
```
$ cd llama-stack/distributions/ollama && docker compose up
# OR
$ cd llama-stack/distributions/ollama-gpu && docker compose up
```
You will see output similar to the following: