Reference quick start option in other playbooks

ChristianZaccaria 2025-07-03 11:53:27 +01:00
parent 3ffa12a108
commit ff5aee807c
5 changed files with 12 additions and 4 deletions


@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n"
+    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {

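For context, the agentic flow these notebooks introduce boils down to a few client calls. Below is a minimal sketch, not taken from the diff: it assumes the llama-stack-client Python package, a Llama Stack server already running on localhost:8321, and an illustrative model id.

```python
# Minimal agent sketch (assumptions: llama-stack-client installed, a Llama
# Stack server running on localhost:8321; model id and port are illustrative).
from llama_stack_client import Agent, LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# An agent pairs a model with instructions; conversations live in sessions.
agent = Agent(
    client,
    model="meta-llama/Llama-3.2-3B-Instruct",  # illustrative model id
    instructions="You are a helpful assistant.",
)
session_id = agent.create_session("demo-session")

# Each turn sends the user's messages and returns the agent's reply.
turn = agent.create_turn(
    session_id=session_id,
    messages=[{"role": "user", "content": "Summarize Llama Stack in one line."}],
    stream=False,
)
print(turn.output_message.content)
```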

@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {


@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {


@@ -359,7 +359,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.13"
+   "version": "3.10.6"
   }
  },
  "nbformat": 4,


@@ -8,6 +8,8 @@ environments. You can build and test using a local server first and deploy to a
 In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)
 as the inference [provider](../providers/inference/index) for a Llama Model.
 
+**💡 Notebook Version:** You can also follow this quickstart guide in a Jupyter notebook format: [quick_start.ipynb](https://github.com/meta-llama/llama-stack/blob/main/docs/quick_start.ipynb)
+
 #### Step 1: Install and setup
 1. Install [uv](https://docs.astral.sh/uv/)
 2. Run inference on a Llama model with [Ollama](https://ollama.com/download)
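The quickstart that this last hunk cross-references follows the same client pattern for plain inference. A minimal sketch, assuming Ollama is serving a Llama model and a Llama Stack server is up on localhost:8321 (the model id is illustrative):

```python
# Minimal inference sketch (assumptions: llama-stack-client installed, Ollama
# serving a Llama model, Llama Stack server on localhost:8321).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# See which models the server has registered before picking one.
for model in client.models.list():
    print(model.identifier)

# Run a simple chat completion against an illustrative model id.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about retrieval-augmented generation."},
    ],
)
print(response.completion_message.content)
```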