Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-29 07:14:20 +00:00)
Reference quick start option in other playbooks
This commit is contained in:
parent: 3ffa12a108
commit: ff5aee807c
5 changed files with 12 additions and 4 deletions
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n"
+    "In this guide, we will showcase how you can build LLM-powered agentic applications using Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
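For context, the "agentic applications" guide this hunk now cross-links drives Llama Stack's agent API from Python. A minimal sketch of that flow follows, assuming a Llama Stack server on the default localhost:8321, the llama-stack-client package, and a placeholder model ID; the Agent constructor arguments have shifted across client releases, so treat this as illustrative rather than canonical.

```python
# Minimal sketch of the agent flow the guide above walks through.
# Assumptions: a Llama Stack server at localhost:8321, `llama-stack-client`
# installed, and a model already registered; the model ID is a placeholder
# and the Agent constructor has varied across client releases.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")

agent = Agent(
    client,
    model="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model ID
    instructions="You are a helpful assistant.",
)
session_id = agent.create_session("demo-session")

# One non-streaming turn; the full guide layers tools and RAG on top of this.
turn = agent.create_turn(
    session_id=session_id,
    messages=[{"role": "user", "content": "What can Llama Stack agents do?"}],
    stream=False,
)
print(turn.output_message.content)
```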
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
@@ -17,7 +17,9 @@
     "\n",
     "Read more about the project here: https://llama-stack.readthedocs.io/en/latest/index.html\n",
     "\n",
-    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n"
+    "In this guide, we will showcase how you can get started with using Llama 4 in Llama Stack.\n",
+    "\n",
+    "**💡 Quick Start Option:** If you want a simpler and faster way to test out Llama Stack, check out the [quick_start.ipynb](quick_start.ipynb) notebook instead. It provides a streamlined experience for getting up and running in just a few steps.\n"
    ]
   },
   {
@@ -359,7 +359,7 @@
     "name": "python",
     "nbconvert_exporter": "python",
     "pygments_lexer": "ipython3",
-    "version": "3.10.13"
+    "version": "3.10.6"
    }
   },
   "nbformat": 4,
@@ -8,6 +8,8 @@ environments. You can build and test using a local server first and deploy to a
 In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)
 as the inference [provider](../providers/inference/index) for a Llama Model.
 
+**💡 Notebook Version:** You can also follow this quickstart guide in a Jupyter notebook format: [quick_start.ipynb](https://github.com/meta-llama/llama-stack/blob/main/docs/quick_start.ipynb)
+
 #### Step 1: Install and setup
 1. Install [uv](https://docs.astral.sh/uv/)
 2. Run inference on a Llama model with [Ollama](https://ollama.com/download)
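For reference, the quickstart this hunk cross-links ends with a short client script against the local server. Below is a minimal sketch, assuming Ollama is already serving a Llama model and a Llama Stack server is listening on the default port 8321; the model ID is an assumption and may differ between releases.

```python
# Minimal sketch of the quickstart flow referenced above.
# Assumptions: Ollama is serving the model locally, a Llama Stack server is
# up at localhost:8321, and `llama-stack-client` is installed; the model ID
# below is an assumption and may differ between releases.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Send a single chat completion through the stack's inference API.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about local inference."},
    ],
)
print(response.completion_message.content)
```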