Update quickstart.md

parent cf079a22a0
commit 4deb95eaae
1 changed file with 18 additions and 0 deletions
@@ -1,3 +1,21 @@
# Quickstart Guide

Llama-Stack allows you to configure your distribution from various providers, so you can focus on going from zero to production quickly.

This guide will walk you through building a local distribution, using Ollama as an inference provider.
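
Before building anything, it can help to confirm that a local Ollama server is reachable and already has the model pulled. The sketch below is a minimal check, assuming Ollama is listening on its default port 11434; the `llama3.2:1b` tag is our assumed Ollama name for the 1B Instruct model.

```python
import requests

# Ollama's REST API lists locally pulled models at GET /api/tags.
# Port 11434 is Ollama's default; adjust if yours differs.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print("Pulled models:", models)

# "llama3.2:1b" is an assumed tag for Llama3.2-1B-Instruct.
if not any(name.startswith("llama3.2:1b") for name in models):
    print("Model not found; pull it with the Ollama CLI first.")
```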
We also have a set of notebooks walking you through how to use Llama-Stack APIs:

- Inference
- Prompt Engineering
- Chatting with Images
- Tool Calling
- Memory API for RAG
- Safety API
- Agentic API
Below, we will learn how to get started with Ollama as an inference provider. Please note that the steps for configuring your provider will vary slightly depending on the service; the user experience, however, remains the same across providers.
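
Once Ollama is serving the model, a quick way to verify the provider end to end is to send a generation request directly to Ollama before wiring it into a distribution. This is a minimal sketch against Ollama's standard `/api/generate` endpoint; the `llama3.2:1b` tag is again our assumption for the 1B Instruct model.

```python
import requests

# One-shot (non-streaming) generation request to the local Ollama server.
payload = {
    "model": "llama3.2:1b",  # assumed tag for Llama3.2-1B-Instruct
    "prompt": "In one sentence, what is an inference provider?",
    "stream": False,         # ask for a single JSON response
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```

If this call succeeds, Ollama is ready to serve as the inference backend for your Llama Stack distribution.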
# Ollama Quickstart Guide
This guide will walk you through setting up an end-to-end workflow with Llama Stack using Ollama, enabling you to perform text generation with the `Llama3.2-1B-Instruct` model. Follow these steps to get started quickly.
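
Once your distribution is built and its server is running, text generation goes through the stack's inference API rather than through Ollama directly. Here is a minimal sketch using the `llama-stack-client` Python package; the base URL (port 5000) and the `Llama3.2-1B-Instruct` model identifier are assumptions that should be matched to however your server was started.

```python
from llama_stack_client import LlamaStackClient
from llama_stack_client.types import UserMessage

# Point the client at your running Llama Stack server; port 5000 is
# an assumption here, so match it to your distribution's config.
client = LlamaStackClient(base_url="http://localhost:5000")

# Ask the stack's inference API for a chat completion; the provider
# (Ollama in this guide) is resolved server-side from your config.
response = client.inference.chat_completion(
    model="Llama3.2-1B-Instruct",
    messages=[UserMessage(role="user", content="Hello! What can you help me with?")],
)
print(response.completion_message.content)
```

Because the provider is resolved by the server, this same client code keeps working unchanged if you later swap Ollama for a different inference provider.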