diff --git a/docs/source/distributions/index.md b/docs/source/distributions/index.md
index ee7f4f23c..1f766e75e 100644
--- a/docs/source/distributions/index.md
+++ b/docs/source/distributions/index.md
@@ -14,7 +14,12 @@ Another simple way to start interacting with Llama Stack is to just spin up a co
 
 **Conda**:
 
-Lastly, if you have a custom or an advanced setup or you are developing on Llama Stack you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run` you can build/run a custom Llama Stack server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details.
+If you have a custom or advanced setup, or you are developing on Llama Stack, you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run`, you can build and run a custom Llama Stack server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details.
+
+**Kubernetes**:
+
+If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](kubernetes_deployment) for more details.
+
 
 ```{toctree}
@@ -25,4 +30,5 @@ importing_as_library
 building_distro
 configuration
 selection
+kubernetes_deployment
 ```
diff --git a/docs/source/distributions/kubernetes_deployment.md b/docs/source/distributions/kubernetes_deployment.md
new file mode 100644
index 000000000..cd307c111
--- /dev/null
+++ b/docs/source/distributions/kubernetes_deployment.md
@@ -0,0 +1,192 @@
+# Kubernetes Deployment Guide
+
+Instead of starting the Llama Stack and vLLM servers locally, we can deploy them in a Kubernetes cluster.
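+The local cluster below is created with [Kind](https://kind.sigs.k8s.io/). If you prefer a declarative setup, Kind can also read a cluster config file; here is a minimal sketch (the `kind-config.yaml` file name and the single-node layout are illustrative assumptions, not part of this guide):
+
+```yaml
+# kind-config.yaml -- illustrative single-node cluster layout (assumed)
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+```
+
+With such a file, the cluster can be created via `kind create cluster --config kind-config.yaml --name llama-stack-test`.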
+In this guide, we'll use a local [Kind](https://kind.sigs.k8s.io/) cluster and a vLLM inference service in the same cluster for demonstration purposes.
+
+First, create a local Kubernetes cluster via Kind:
+
+```bash
+kind create cluster --image kindest/node:v1.32.0 --name llama-stack-test
+```
+
+Start the vLLM server as a Kubernetes Pod and Service (remember to replace `<YOUR-TOKEN>` with your actual Hugging Face token, and `<VLLM-IMAGE>` with a vLLM server image that matches your local system architecture):
+
+```bash
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: vllm-models
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 50Gi
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: hf-token-secret
+type: Opaque
+stringData:
+  token: "<YOUR-TOKEN>"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: vllm-server
+  labels:
+    app: vllm
+spec:
+  containers:
+  - name: llama-stack
+    image: <VLLM-IMAGE>
+    command:
+    - bash
+    - -c
+    - |
+      MODEL="meta-llama/Llama-3.2-1B-Instruct"
+      MODEL_PATH=/app/model/$(basename $MODEL)
+      huggingface-cli login --token $HUGGING_FACE_HUB_TOKEN
+      huggingface-cli download $MODEL --local-dir $MODEL_PATH --cache-dir $MODEL_PATH
+      python3 -m vllm.entrypoints.openai.api_server --model $MODEL_PATH --served-model-name $MODEL --port 8000
+    ports:
+    - containerPort: 8000
+    volumeMounts:
+    - name: llama-storage
+      mountPath: /app/model
+    env:
+    - name: HUGGING_FACE_HUB_TOKEN
+      valueFrom:
+        secretKeyRef:
+          name: hf-token-secret
+          key: token
+  volumes:
+  - name: llama-storage
+    persistentVolumeClaim:
+      claimName: vllm-models
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: vllm-server
+spec:
+  selector:
+    app: vllm
+  ports:
+  - port: 8000
+    targetPort: 8000
+  type: NodePort
+EOF
+```
+
+We can verify that the vLLM server has started successfully via its logs (downloading the model might take a couple of minutes):
+
+```bash
+$ kubectl logs vllm-server
+...
+INFO: Started server process [1]
+INFO: Waiting for application startup.
+INFO: Application startup complete.
+INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
+```
+
+Then we can modify the Llama Stack run configuration YAML to use the following inference provider:
+
+```yaml
+providers:
+  inference:
+  - provider_id: vllm
+    provider_type: remote::vllm
+    config:
+      url: http://vllm-server.default.svc.cluster.local:8000/v1
+      max_tokens: 4096
+      api_token: fake
+```
+
+Once we have defined the run configuration for Llama Stack, we can build a container image with that configuration and the server source code:
+
+```bash
+cat >/tmp/test-vllm-llama-stack/Containerfile.llama-stack-run-k8s <