diff --git a/docs/source/distributions/building_distro.md b/docs/source/distributions/building_distro.md
index 83069aa05..9034a1811 100644
--- a/docs/source/distributions/building_distro.md
+++ b/docs/source/distributions/building_distro.md
@@ -4,7 +4,7 @@ This guide will walk you through the steps to get started with building a Llama Stack distribution from scratch with your choice of API providers.
 
-## Llama Stack Build
+### Llama Stack Build
 
 In order to build your own distribution, we recommend you clone the `llama-stack` repository.
 
@@ -373,7 +373,7 @@ After this step is successful, you should be able to find the built container image
 
 ::::
 
-## Running your Stack server
+### Running your Stack server
 
 Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end by the `llama stack build` step.
 ```
diff --git a/docs/source/distributions/index.md b/docs/source/distributions/index.md
index 5d84ebd9e..64fec543f 100644
--- a/docs/source/distributions/index.md
+++ b/docs/source/distributions/index.md
@@ -1,41 +1,27 @@
-# Starting a Llama Stack
+# Starting a Llama Stack Server
+
+You can run a Llama Stack server in one of the following ways:
+
+**As a Library**:
+
+This is the simplest way to get started. Using Llama Stack as a library means you do not need to start a server. This is especially useful when you are not running inference locally and are instead relying on an external inference service (e.g., Fireworks, Together, Groq). See [Using Llama Stack as a Library](importing_as_library).
+
+
+**Docker**:
+
+Another simple way to start interacting with Llama Stack is to spin up a Docker container that comes pre-built with all the providers you need. We provide a number of pre-built Docker containers so you can start a Llama Stack server instantly. You can also build your own custom Docker container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](selection) for more details.
+
+
+**Conda**:
+
+Lastly, if you have a custom or advanced setup, or you are developing on Llama Stack itself, you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run`, you can build and run a server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details, or the quick sketch below.
+
+
 ```{toctree}
-:maxdepth: 3
+:maxdepth: 1
 :hidden:
 
 importing_as_library
 building_distro
 configuration
 ```
-
-You can instantiate a Llama Stack in one of the following ways:
-- **As a Library**: this is the simplest, especially if you are using an external inference service. See [Using Llama Stack as a Library](importing_as_library)
-- **Docker**: we provide a number of pre-built Docker containers so you can start a Llama Stack server instantly. You can also build your own custom Docker container.
-- **Conda**: finally, you can build a custom Llama Stack server using `llama stack build` containing the exact combination of providers you wish. We have provided various templates to make getting started easier.
-
-Which templates / distributions to choose depends on the hardware you have for running LLM inference.
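+For example, with the Conda path the basic flow boils down to two commands. The snippet below is only a sketch: it assumes the `llama-stack` CLI is installed, uses the `ollama` template purely as an illustration, and the exact flags and the location of the generated run file may vary between releases.
+
+```bash
+# Build a distribution from one of the provided templates (ollama is just an example choice).
+llama stack build --template ollama --image-type conda
+
+# The build step writes out a run configuration (YAML); point `llama stack run` at that file to start the server.
+llama stack run ~/.llama/distributions/llamastack-ollama/ollama-run.yaml
+```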
-
-- **Do you have access to a machine with powerful GPUs?**
-If so, we suggest:
-  - {dockerhub}`distribution-remote-vllm` ([Guide](self_hosted_distro/remote-vllm))
-  - {dockerhub}`distribution-meta-reference-gpu` ([Guide](self_hosted_distro/meta-reference-gpu))
-  - {dockerhub}`distribution-tgi` ([Guide](self_hosted_distro/tgi))
-  - {dockerhub} `distribution-nvidia` ([Guide](self_hosted_distro/nvidia))
-
-- **Are you running on a "regular" desktop machine?**
-If so, we suggest:
-  - {dockerhub}`distribution-ollama` ([Guide](self_hosted_distro/ollama))
-
-- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest:
-  - {dockerhub}`distribution-together` ([Guide](self_hosted_distro/together))
-  - {dockerhub}`distribution-fireworks` ([Guide](self_hosted_distro/fireworks))
-
-- **Do you want to run Llama Stack inference on your iOS / Android device** If so, we suggest:
-  - [iOS SDK](ondevice_distro/ios_sdk)
-  - [Android](ondevice_distro/android_sdk)
-
-- **Do you want a hosted Llama Stack endpoint?** If so, we suggest:
-  - [Remote-Hosted Llama Stack Endpoints](remote_hosted_distro/index)
-
-
-You can also build your own [custom distribution](building_distro).
diff --git a/docs/source/distributions/ondevice_distro/ios_sdk.md b/docs/source/distributions/ondevice_distro/ios_sdk.md
index c9d3a89b5..ffaf74533 100644
--- a/docs/source/distributions/ondevice_distro/ios_sdk.md
+++ b/docs/source/distributions/ondevice_distro/ios_sdk.md
@@ -1,6 +1,3 @@
----
-orphan: true
----
 # iOS SDK
 
 We offer both remote and on-device use of Llama Stack in Swift via two components:
diff --git a/docs/source/distributions/remote_hosted_distro/index.md b/docs/source/distributions/remote_hosted_distro/index.md
index 0f86bf73f..2fbe381af 100644
--- a/docs/source/distributions/remote_hosted_distro/index.md
+++ b/docs/source/distributions/remote_hosted_distro/index.md
@@ -1,6 +1,3 @@
----
-orphan: true
----
 # Remote-Hosted Distributions
 
 Remote-Hosted distributions are available endpoints serving Llama Stack API that you can directly connect to.
diff --git a/docs/source/distributions/selection.md b/docs/source/distributions/selection.md
new file mode 100644
index 000000000..08c3e985a
--- /dev/null
+++ b/docs/source/distributions/selection.md
@@ -0,0 +1,56 @@
+# List of Distributions
+
+Here is a list of the distributions we provide out of the box, which you can use to start a Llama Stack server.
+
+## Selection of a Distribution / Template
+
+Which templates / distributions to choose depends on the hardware you have for running LLM inference.
+
+- **Do you want a hosted Llama Stack endpoint?** If so, we suggest leveraging our partners who host Llama Stack endpoints, namely _fireworks.ai_ and _together.xyz_.
+  - Read more here: [Remote-Hosted Endpoints](remote_hosted_distro/index).
+
+
+- **Do you have access to machines with GPUs?** If you wish to run Llama Stack locally or on a cloud instance and host your own Llama Stack endpoint, we suggest:
+  - {dockerhub}`distribution-remote-vllm` ([Guide](self_hosted_distro/remote-vllm))
+  - {dockerhub}`distribution-meta-reference-gpu` ([Guide](self_hosted_distro/meta-reference-gpu))
+  - {dockerhub}`distribution-tgi` ([Guide](self_hosted_distro/tgi))
+  - {dockerhub}`distribution-nvidia` ([Guide](self_hosted_distro/nvidia))
+
+- **Are you running on a "regular" desktop or laptop?** We suggest using the ollama template for quick prototyping and getting started without having to worry about GPUs.
+  - {dockerhub}`distribution-ollama` ([Guide](self_hosted_distro/ollama))
+
+- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest:
+  - {dockerhub}`distribution-together` ([Guide](self_hosted_distro/together))
+  - {dockerhub}`distribution-fireworks` ([Guide](self_hosted_distro/fireworks))
+
+- **Do you want to run Llama Stack inference on your iOS / Android device?** Lastly, we also provide templates for running Llama Stack inference on-device:
+  - [iOS SDK](ondevice_distro/ios_sdk)
+  - [Android](ondevice_distro/android_sdk)
+
+
+- **If none of the above fit your needs, you can also build your own [custom distribution](building_distro).**
+
+### Distribution Details
+
+```{toctree}
+:maxdepth: 1
+
+remote_hosted_distro/index
+self_hosted_distro/remote-vllm
+self_hosted_distro/meta-reference-gpu
+self_hosted_distro/tgi
+self_hosted_distro/nvidia
+self_hosted_distro/ollama
+self_hosted_distro/together
+self_hosted_distro/fireworks
+ondevice_distro/index
+```
+
+### On-Device Distributions
+
+```{toctree}
+:maxdepth: 1
+
+ondevice_distro/ios_sdk
+ondevice_distro/android_sdk
+```
diff --git a/docs/source/distributions/self_hosted_distro/ollama.md b/docs/source/distributions/self_hosted_distro/ollama.md
index b03a5ee16..93f4adfb3 100644
--- a/docs/source/distributions/self_hosted_distro/ollama.md
+++ b/docs/source/distributions/self_hosted_distro/ollama.md
@@ -1,6 +1,3 @@
----
-orphan: true
----
 # Ollama Distribution
 
 ```{toctree}
diff --git a/docs/source/distributions/self_hosted_distro/remote-vllm.md b/docs/source/distributions/self_hosted_distro/remote-vllm.md
index 95dd392c1..1638e9b11 100644
--- a/docs/source/distributions/self_hosted_distro/remote-vllm.md
+++ b/docs/source/distributions/self_hosted_distro/remote-vllm.md
@@ -1,6 +1,3 @@
----
-orphan: true
----
 # Remote vLLM Distribution
 ```{toctree}
 :maxdepth: 2
diff --git a/docs/source/distributions/self_hosted_distro/tgi.md b/docs/source/distributions/self_hosted_distro/tgi.md
index 1883b926c..5a709d0a8 100644
--- a/docs/source/distributions/self_hosted_distro/tgi.md
+++ b/docs/source/distributions/self_hosted_distro/tgi.md
@@ -1,7 +1,3 @@
----
-orphan: true
----
-
 # TGI Distribution
 
 ```{toctree}
diff --git a/docs/source/distributions/self_hosted_distro/together.md b/docs/source/distributions/self_hosted_distro/together.md
index 2d5c8fc77..707f5be7a 100644
--- a/docs/source/distributions/self_hosted_distro/together.md
+++ b/docs/source/distributions/self_hosted_distro/together.md
@@ -1,6 +1,3 @@
----
-orphan: true
----
 # Together Distribution
 
 ```{toctree}
diff --git a/docs/source/index.md b/docs/source/index.md
index 7e7977738..bc4666be3 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -57,6 +57,7 @@ introduction/index
 getting_started/index
 concepts/index
 distributions/index
+distributions/selection
 building_applications/index
 benchmark_evaluations/index
 playground/index