Update quickstart.md

This commit is contained in:
Sanyam Bhutani 2024-11-21 06:37:21 -08:00
parent cf079a22a0
commit 4deb95eaae

@@ -1,3 +1,21 @@
# Quickstart Guide
Llama-Stack lets you configure your distribution from various providers, so you can focus on going from zero to production quickly.
This guide will walk you through building a local distribution that uses Ollama as an inference provider.
We also have a set of notebooks walking you through how to use Llama-Stack APIs:
- Inference
- Prompt Engineering
- Chatting with Images
- Tool Calling
- Memory API for RAG
- Safety API
- Agentic API
Below, we will learn how to get started with Ollama as an inference provider. Please note that the steps for configuring your provider will vary slightly depending on the service; however, the user experience will remain the same.
# Ollama Quickstart Guide
This guide will walk you through setting up an end-to-end workflow with Llama Stack using Ollama as the inference provider, enabling you to perform text generation with the `Llama3.2-1B-Instruct` model. Follow these steps to get started quickly.
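As a preview of where the workflow ends up, here is a minimal sketch of the text generation step using the `llama-stack-client` Python package. It assumes a Llama Stack server backed by Ollama is already running at `http://localhost:5000` with `Llama3.2-1B-Instruct` available; the port, parameter names, and response attributes may differ slightly depending on your client version.

```python
# Minimal sketch, not the exact guide code: assumes a Llama Stack server
# (with the Ollama provider) is already running at http://localhost:5000
# and that Llama3.2-1B-Instruct is registered as an available model.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Send a simple chat completion request to the inference API.
response = client.inference.chat_completion(
    model="Llama3.2-1B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a two-sentence introduction to llamas."},
    ],
)

# Print the assistant's reply; attribute names may vary across client versions.
print(response.completion_message.content)
```

If the request succeeds, the model's reply is printed to the console. The remaining steps below cover getting the server to that running state.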