chore: Enabling Milvus for VectorIO CI
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
parent 709eb7da33
commit c8d41d45ec

115 changed files with 2919 additions and 184 deletions
@@ -6,7 +6,7 @@ Llama Stack is a stateful service with REST APIs to support the seamless transit
 environments. You can build and test using a local server first and deploy to a hosted endpoint for production.
 
 In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)
-as the inference [provider](../providers/index.md#inference) for a Llama Model.
+as the inference [provider](../providers/inference/index) for a Llama Model.
 
 #### Step 1: Install and setup
 1. Install [uv](https://docs.astral.sh/uv/)
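Since the hunk above touches the quickstart's Step 1, here is a minimal shell sketch of that step, assuming a Unix-like machine with Ollama already installed; the installer command is the one uv's documentation points to, and the model tag is purely illustrative:

```sh
# Step 1 sketch: install uv via its documented standalone installer
# (see https://docs.astral.sh/uv/) and confirm it is on the PATH.
curl -LsSf https://astral.sh/uv/install.sh | sh
uv --version

# Assumption: Ollama is already installed. Pull a small Llama model so it
# can serve as the local inference provider; the tag below is illustrative.
ollama pull llama3.2:3b
```

The guide leans on uv to manage the Python environment for the steps that follow, which is why installing it comes first.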