From 377d545df5e565851e3edb10a301a0006225f638 Mon Sep 17 00:00:00 2001
From: Chirag Modi <98582575+cmodi-meta@users.noreply.github.com>
Date: Fri, 14 Mar 2025 15:10:21 -0700
Subject: [PATCH] Update index.md

---
 docs/source/concepts/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/concepts/index.md b/docs/source/concepts/index.md
index 9dee2b859..a94511a0d 100644
--- a/docs/source/concepts/index.md
+++ b/docs/source/concepts/index.md
@@ -71,4 +71,4 @@ While there is a lot of flexibility to mix-and-match providers, often users will
 
 **Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.
 
-**On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or a tablet.) We provide Distros for iOS and Android (coming soon.)
+**On-device Distro**: To run Llama Stack directly on an edge device (mobile phone or a tablet), we provide Distros for [iOS](https://llama-stack.readthedocs.io/en/latest/distributions/ondevice_distro/ios_sdk.html) and [Android](https://llama-stack.readthedocs.io/en/latest/distributions/ondevice_distro/android_sdk.html)
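
For reviewers, here is a minimal sketch of how the "Locally Hosted Distro" path described in the hunk above might be exercised from a client. It assumes a Llama Stack server is already running locally (for example, an Ollama-backed distro started with `llama stack run`) on `http://localhost:8321`, that the `llama-stack-client` Python package is installed, and that the model id shown has been registered with that server; these names are assumptions for illustration, not part of this patch.

```python
# Sketch only: assumes a locally hosted Llama Stack distro is already running
# (e.g. backed by Ollama for inference) and that llama-stack-client is installed.
from llama_stack_client import LlamaStackClient

# Point the client at the locally hosted server (port is an assumption).
client = LlamaStackClient(base_url="http://localhost:8321")

# Ask the Inference API for a chat completion; the model id is assumed to be
# one that has been registered with the local distro.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello from a locally hosted distro!"}],
)
print(response.completion_message.content)
```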