diff --git a/docs/getting_started.ipynb b/docs/getting_started.ipynb
index cdaf074b8..88878c9be 100644
--- a/docs/getting_started.ipynb
+++ b/docs/getting_started.ipynb
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n"
+    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
diff --git a/docs/getting_started_llama4.ipynb b/docs/getting_started_llama4.ipynb
index d489b5d06..edefda28c 100644
--- a/docs/getting_started_llama4.ipynb
+++ b/docs/getting_started_llama4.ipynb
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
diff --git a/docs/getting_started_llama_api.ipynb b/docs/getting_started_llama_api.ipynb
index 128e9114a..e6c74986b 100644
--- a/docs/getting_started_llama_api.ipynb
+++ b/docs/getting_started_llama_api.ipynb
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
diff --git a/docs/quick_start.ipynb b/docs/quick_start.ipynb
index ff8151b7e..4ae1dbe8d 100644
--- a/docs/quick_start.ipynb
+++ b/docs/quick_start.ipynb
@@ -359,7 +359,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.13"
+   "version": "3.10.6"
   }
  },
  "nbformat": 4,
diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index 8382758cc..ea45da1f7 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -8,6 +8,8 @@ environments. You can build and test using a local server first and deploy to a
 In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/) as the inference [provider](../providers/inference/index) for a Llama Model.
 
+**💡 Notebook Version:** You can also follow this quickstart guide in a Jupyter notebook format: [quick_start.ipynb](https://github.com/meta-llama/llama-stack/blob/main/docs/quick_start.ipynb)
+
 #### Step 1: Install and setup
 1. Install [uv](https://docs.astral.sh/uv/)
 2. Run inference on a Llama model with [Ollama](https://ollama.com/download)
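For context on what the quickstart that these new links point to looks like in practice, here is a minimal Python sketch of querying a locally running Llama Stack server. It is not part of the patch above: it assumes the server is already up on the default port 8321 and that an Ollama-backed Llama model has been registered; the model ID shown is an assumption and may differ in your setup.

```python
# Minimal sketch, not part of the patch above: query a local Llama Stack
# server. Assumes `pip install llama-stack-client`, a server listening on
# the default port 8321, and an Ollama-backed Llama model registered
# under the ID below (adjust to whatever your server reports).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the models the server knows about, to confirm the ID to use.
for model in client.models.list():
    print(model.identifier)

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # assumed model ID
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.completion_message.content)
```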