From 211a7f8f2870316e0345b567c81d050876a145c3 Mon Sep 17 00:00:00 2001
From: Ashwin Bharambe
Date: Fri, 8 Nov 2024 18:02:43 -0800
Subject: [PATCH] Write some docs

---
 .../self_hosted_distro/ollama.md              | 40 +++++----
 .../self_hosted_distro/remote_vllm.md         | 83 +++++++++++++++++++
 docs/source/getting_started/index.md          | 21 +++++
 3 files changed, 126 insertions(+), 18 deletions(-)
 create mode 100644 docs/source/getting_started/distributions/self_hosted_distro/remote_vllm.md

diff --git a/docs/source/getting_started/distributions/self_hosted_distro/ollama.md b/docs/source/getting_started/distributions/self_hosted_distro/ollama.md
index 03bc3eb63..37bef9536 100644
--- a/docs/source/getting_started/distributions/self_hosted_distro/ollama.md
+++ b/docs/source/getting_started/distributions/self_hosted_distro/ollama.md
@@ -2,17 +2,21 @@
The `llamastack/distribution-ollama` distribution consists of the following provider configurations.

-| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
-|----------------- |---------------- |---------------- |---------------------------------- |---------------- |---------------- |
-| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chroma | remote::ollama | meta-reference |
+| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
+|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- |
+| **Provider(s)** | remote::ollama | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference |

+## Using Docker Compose
+
+You can use `docker compose` to start an Ollama server and a Llama Stack server together with a single command.
+
### Docker: Start the Distribution (Single Node regular Desktop machine)

> [!NOTE]
> This will start an ollama server with CPU only, please see [Ollama Documentations](https://github.com/ollama/ollama) for serving models on CPU only.

-```
+```bash
$ cd distributions/ollama; docker compose up
```

@@ -21,12 +25,12 @@ $ cd distributions/ollama; docker compose up
> [!NOTE]
> This assumes you have access to GPU to start a Ollama server with access to your GPU.

-```
+```bash
$ cd distributions/ollama-gpu; docker compose up
```

You will see outputs similar to following ---
-```
+```bash
[ollama] | [GIN] 2024/10/18 - 21:19:41 | 200 | 226.841µs | ::1 | GET "/api/ps"
[ollama] | [GIN] 2024/10/18 - 21:19:42 | 200 | 60.908µs | ::1 | GET "/api/ps"
INFO: Started server process [1]
@@ -40,24 +44,24 @@ INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
```

To kill the server
-```
+```bash
docker compose down
```

-### Conda: ollama run + llama stack run
+## Starting Ollama and Llama Stack separately

-If you wish to separately spin up a Ollama server, and connect with Llama Stack, you may use the following commands.
+If you wish to spin up an Ollama server separately and connect it to Llama Stack, use the following commands.

-#### Start Ollama server.
-- Please check the [Ollama Documentations](https://github.com/ollama/ollama) for more details.
+#### Start Ollama server
+- Please check the [Ollama Documentation](https://github.com/ollama/ollama) for more details.
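+- Once the server is up (via either option below), you can do a quick sanity check that it is reachable before connecting Llama Stack to it. This is a minimal sketch, assuming Ollama's default port `11434` (the same port mapped in the Docker command below) and its standard REST endpoints:
+
+```bash
+# List the models Ollama has available locally (assumes the default port 11434)
+curl http://localhost:11434/api/tags
+
+# Show which models are currently loaded and running
+curl http://localhost:11434/api/ps
+```
+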
**Via Docker**
-```
+```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

**Via CLI**
-```
+```bash
ollama run <model_id>
```

@@ -65,7 +69,7 @@ ollama run

**Via Conda**

-```
+```bash
llama stack build --template ollama --image-type conda
llama stack run ./gpu/run.yaml
```

@@ -76,7 +80,7 @@ docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run
```

Make sure in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
-```
+```yaml
inference:
  - provider_id: ollama0
    provider_type: remote::ollama
@@ -90,7 +94,7 @@ inference:

You can use ollama for managing model downloads.

-```
+```bash
ollama pull llama3.1:8b-instruct-fp16
ollama pull llama3.1:70b-instruct-fp16
```

@@ -100,7 +104,7 @@ ollama pull llama3.1:70b-instruct-fp16

To serve a new model with `ollama`

-```
+```bash
ollama run <model_id>
```

@@ -113,7 +117,7 @@ llama3.1:8b-instruct-fp16 4aacac419454 17 GB 100% GPU 4 minutes fro
```

To verify that the model served by ollama is correctly connected to Llama Stack server
-```
+```bash
$ llama-stack-client models list
+----------------------+----------------------+---------------+-----------------------------------------------+
| identifier | llama_model | provider_id | metadata |
diff --git a/docs/source/getting_started/distributions/self_hosted_distro/remote_vllm.md b/docs/source/getting_started/distributions/self_hosted_distro/remote_vllm.md
new file mode 100644
index 000000000..2ab8df7b7
--- /dev/null
+++ b/docs/source/getting_started/distributions/self_hosted_distro/remote_vllm.md
@@ -0,0 +1,83 @@
+# Remote vLLM Distribution
+
+The `llamastack/distribution-remote-vllm` distribution consists of the following provider configurations.
+
+| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
+|----------------- |---------------- |---------------- |------------------------------------ |---------------- |---------------- |
+| **Provider(s)** | remote::vllm | meta-reference | remote::pgvector, remote::chromadb | meta-reference | meta-reference |
+
+You can use this distribution if you have GPUs and want to run an independent vLLM server container for inference.
+
+## Using Docker Compose
+
+You can use `docker compose` to start a vLLM container and a Llama Stack server container together.
+
+> [!NOTE]
+> This assumes you have access to a GPU on which to start the vLLM server.
+
+```bash
+$ cd distributions/remote-vllm; docker compose up
+```
+
+You will see output similar to the following:
+```
+
+```
+
+To kill the server
+```bash
+docker compose down
+```
+
+## Starting vLLM and Llama Stack separately
+
+You may want to start the vLLM server and connect it to Llama Stack manually. Once the vLLM server is running, there are two ways to start a Llama Stack server that points to it: via Conda or via Docker.
+
+#### Start vLLM server
+
+```bash
+docker run --runtime nvidia --gpus all \
+    -v ~/.cache/huggingface:/root/.cache/huggingface \
+    --env "HUGGING_FACE_HUB_TOKEN=<your_hf_token>" \
+    -p 8000:8000 \
+    --ipc=host \
+    vllm/vllm-openai:latest \
+    --model meta-llama/Llama-3.1-8B-Instruct
+```
+
+Please check the [vLLM Documentation](https://docs.vllm.ai/en/v0.5.5/serving/deploying_with_docker.html) for more details.
+
+#### Start Llama Stack server pointing to your vLLM server
+
+We have provided a template `run.yaml` file in the `distributions/remote-vllm` directory. Please make sure to modify the `url` under the `inference` provider config so that it points to your vLLM server endpoint.
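+
+Before editing `run.yaml`, you may want to confirm that the vLLM server is reachable and serving your model. This is a minimal sketch, assuming the server was started with the `docker run` command above and therefore exposes its OpenAI-compatible API on port `8000`:
+
+```bash
+# List the models served by vLLM's OpenAI-compatible API;
+# the returned id should match the --model flag used above
+curl http://localhost:8000/v1/models
+```
+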
+As an example, if your vLLM server is running on `http://127.0.0.1:8000`, your `run.yaml` file should look like the following:
+```yaml
+inference:
+  - provider_id: vllm0
+    provider_type: remote::vllm
+    config:
+      url: http://127.0.0.1:8000
+```
+
+**Via Conda**
+
+If you are using Conda, you can build and run the Llama Stack server with the following commands:
+```bash
+cd distributions/remote-vllm
+llama stack build --template remote_vllm --image-type conda
+llama stack run run.yaml
+```
+
+**Via Docker**
+
+You can use the Llama Stack Docker image to start the server with the following command:
+```bash
+docker run --network host -it -p 5000:5000 \
+  -v ~/.llama:/root/.llama \
+  -v ./run.yaml:/root/llamastack-run-remote-vllm.yaml \
+  --gpus=all \
+  llamastack/distribution-remote-vllm \
+  --yaml_config /root/llamastack-run-remote-vllm.yaml
+```
diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index afe26b4bd..718bb185c 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -80,6 +80,11 @@ Llama3.1-8B-Instruct Llama3.2-1B Llama3.2-3B-Instruct Llama-
:::

+:::{tab-item} vLLM
+##### System Requirements
+Access to Single-Node GPU to start a vLLM server.
+:::
+
:::{tab-item} tgi
##### System Requirements
Access to Single-Node GPU to start a TGI server.
@@ -119,6 +124,22 @@ docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.
```
:::

+:::{tab-item} vLLM
+```
+$ cd llama-stack/distributions/remote-vllm && docker compose up
+```
+
+The script will first start the vLLM server on port 8000, then start the Llama Stack distribution server, which connects to it for inference. You should see output like the following:
+```
+
+```
+
+To kill the server
+```
+docker compose down
+```
+:::
+
:::{tab-item} tgi
```
$ cd llama-stack/distributions/tgi && docker compose up