diff --git a/zero_to_hero_guide/quickstart.md b/zero_to_hero_guide/quickstart.md
index df8e9abc4..c203f49bc 100644
--- a/zero_to_hero_guide/quickstart.md
+++ b/zero_to_hero_guide/quickstart.md
@@ -1,3 +1,30 @@
+# Quickstart Guide
+
+Llama Stack lets you configure your distribution from a choice of providers, so you can focus on going from zero to production quickly.
+
+This guide walks you through building a local distribution that uses Ollama as its inference provider.
+
+We also have a set of notebooks that walk you through the Llama Stack APIs:
+
+- Inference
+- Prompt Engineering
+- Chatting with Images
+- Tool Calling
+- Memory API for RAG
+- Safety API
+- Agentic API
+
+Below, we will learn how to get started with Ollama as an inference provider. Note that the configuration steps vary a little from provider to provider, but the user experience remains the same.
+
 # Ollama Quickstart Guide
 
 This guide will walk you through setting up an end-to-end workflow with Llama Stack with ollama, enabling you to perform text generation using the `Llama3.2-1B-Instruct` model. Follow these steps to get started quickly.
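+
+At a glance, the workflow looks roughly like the sketch below. This is a minimal preview, not the authoritative commands: the model tag and CLI flags are assumptions that can vary across Ollama and Llama Stack versions, and `<path/to/your/run.yaml>` is a placeholder for the config that `llama stack build` generates. Follow the steps in this guide for the exact commands.
+
+```bash
+# Assumed model tag and flags; see the steps in this guide for exact commands.
+ollama run llama3.2:1b-instruct-fp16     # 1. serve Llama 3.2 1B Instruct locally via Ollama
+llama stack build --template ollama      # 2. build a distribution that uses the Ollama provider
+llama stack run <path/to/your/run.yaml>  # 3. start the Llama Stack server on top of it
+```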