Update quickstart.md
This commit is contained in:
parent 3de8d94966
commit 4c295796bc
1 changed file with 28 additions and 1 deletion
@@ -1,3 +1,23 @@
# Quickstart Guide
Llama-Stack allows you to configure your distribution from various providers, so you can focus on going from zero to production fast.
This guide will walk you through how to build a local distribution, using Ollama as an inference provider.
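
If Ollama isn't running yet, a minimal local setup might look like the sketch below. The model tag is an assumption based on Ollama's naming conventions; check `ollama list` for what you actually have.

```
# Start the Ollama server; it listens on http://localhost:11434 by default.
ollama serve &

# Pull the 1B instruct model. The tag below is an assumed Ollama tag --
# it is distinct from the Llama-Stack model ID used later in this guide.
ollama pull llama3.2:1b
```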
We also have a set of notebooks walking you through how to use Llama-Stack APIs:
- Inference
- Prompt Engineering
- Chatting with Images
- Tool Calling
- Memory API for RAG
- Safety API
- Agentic API
Below, we will learn how to get started with Ollama as an inference provider. Please note that the steps for configuring your provider will vary slightly depending on the service; the user experience, however, remains universal. This is the power of Llama-Stack.
Prototype locally using Ollama, then deploy to the cloud with your favorite provider or your own deployment. Use any API from any provider while focusing on development.
# Ollama Quickstart Guide
This guide will walk you through setting up an end-to-end workflow with Llama Stack using Ollama, enabling you to perform text generation with the `Llama3.2-1B-Instruct` model. Follow these steps to get started quickly.
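
As a preview of the end state: once the stack is built and running (steps below), text generation is a single HTTP call. The endpoint path, port, and payload shape in this sketch are assumptions and depend on your Llama Stack version and run configuration.

```
# Hypothetical request; adjust host, port, and model ID to your setup.
curl -s http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Llama3.2-1B-Instruct",
    "messages": [{"role": "user", "content": "Write a haiku about llamas."}],
    "stream": false
  }'
```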
@@ -82,6 +102,13 @@ If you're looking for more specific topics like tool calling or agent setup, we
```
llama stack build --template ollama --image-type conda
```
After this step, you will see the console output:
```
Build Successful! Next steps:
1. Set the environment variables: LLAMASTACK_PORT, OLLAMA_URL, INFERENCE_MODEL, SAFETY_MODEL
2. `llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml`
```
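
As a concrete sketch of those next steps, the shell commands might look like this. The values below are assumptions for illustration; substitute your own port, Ollama URL, and model IDs.

```
# Assumed values for illustration; adjust to your environment.
export LLAMASTACK_PORT=5000
export OLLAMA_URL=http://localhost:11434   # Ollama's default address
export INFERENCE_MODEL=Llama3.2-1B-Instruct
export SAFETY_MODEL=Llama-Guard-3-1B       # assumed; only needed if safety is enabled

# Start the server with the generated run configuration.
llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml
```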
2. **Edit Configuration**:
- Modify the `ollama-run.yaml` file located at `/Users/yourusername/.llama/distributions/llamastack-ollama/ollama-run.yaml`:
- Change the `chromadb` port to `8000`.
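
For orientation, the relevant fragment of `ollama-run.yaml` might look roughly like the sketch below. The surrounding keys are assumptions and vary by version; the part that matters is the `chromadb` port value.

```
# Hypothetical fragment of ollama-run.yaml; key names vary by version.
memory:
  - provider_id: chromadb
    provider_type: remote::chromadb
    config:
      host: localhost
      port: 8000   # changed to 8000 as described above
```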
@@ -214,4 +241,4 @@ This command initializes the model to interact with your local Llama Stack instance
**Explore Example Apps**: Check out [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) for example applications built using Llama Stack.
---
---