# What does this PR do?

See https://github.com/meta-llama/llama-stack/pull/1171, which is the original PR. Author: @zc277584121

feat: add [Milvus](https://milvus.io/) vectorDB

Note: I use the synchronous MilvusClient to implement it instead of AsyncMilvusClient, because when I tested AsyncMilvusClient it raised event-loop issues; I think the AsyncMilvusClient SDK is not yet robust enough to be compatible with the llama_stack framework.

## Test Plan

The unit tests and end-to-end tests have passed. Here are my end-to-end test logs, including the client code, client log, and server logs from both the inline and remote settings:
[test_end2end_logs.zip](https://github.com/user-attachments/files/18964391/test_end2end_logs.zip)

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Cheney Zhang <chen.zhang@zilliz.com>
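As an aside on the event-loop note above: the issue is essentially about calling a synchronous client from async provider code. The sketch below is not the PR's actual implementation; it only illustrates, under that assumption, how blocking `MilvusClient` calls can be offloaded with `asyncio.to_thread`. The database path and collection name are made up for the example.

```python
import asyncio

from pymilvus import MilvusClient


async def search_with_sync_client(query_vector: list[float]) -> list:
    """Run a blocking pymilvus search without stalling the event loop."""
    # MilvusClient is synchronous; "./milvus_demo.db" is an illustrative
    # Milvus Lite path and "demo_collection" a hypothetical collection.
    client = MilvusClient("./milvus_demo.db")
    return await asyncio.to_thread(
        client.search,
        collection_name="demo_collection",
        data=[query_vector],
        limit=3,
    )
```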
---
orphan: true
---
# Milvus
[Milvus](https://milvus.io/) is an inline and remote vector database provider for Llama Stack. It
allows you to store and query vectors directly within a Milvus database.
That means you're not limited to storing vectors in memory or in a separate service.
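
Being both an inline and a remote provider means Milvus can either run embedded in the Llama Stack process (Milvus Lite, backed by a local file) or as a separate Milvus server you connect to over the network. As a rough pymilvus sketch of the two modes, where the file path, URI, and token are placeholders rather than values taken from this provider's configuration:

```python
from pymilvus import MilvusClient

# Inline / embedded: Milvus Lite keeps everything in a local file.
# "./milvus_demo.db" is just an example path.
inline_client = MilvusClient("./milvus_demo.db")

# Remote: connect to a standalone Milvus server over the network.
# The URI and token are placeholders for your own deployment.
remote_client = MilvusClient(uri="http://localhost:19530", token="root:Milvus")
```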
## Features
- Easy to use
- Fully integrated with Llama Stack
## Usage
To use Milvus in your Llama Stack project, follow these steps:
1. Install the necessary dependencies.
2. Configure your Llama Stack project to use Milvus.
3. Start storing and querying vectors (see the sketch after this list).
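
In practice, Llama Stack's vector I/O APIs drive steps 2 and 3 once the Milvus provider is configured. Purely to illustrate what "storing and querying vectors" means against Milvus itself, here is a hedged pymilvus sketch; the collection name, vector dimension, and data are invented for the example and are not part of the provider's API:

```python
import random

from pymilvus import MilvusClient

client = MilvusClient("./milvus_demo.db")  # illustrative Milvus Lite path

# Create a collection sized for 128-dimensional vectors (example dimension).
client.create_collection(collection_name="demo_collection", dimension=128)

# Store a few random vectors keyed by integer ids.
docs = [{"id": i, "vector": [random.random() for _ in range(128)]} for i in range(3)]
client.insert(collection_name="demo_collection", data=docs)

# Query: find the two nearest neighbours of a random query vector.
results = client.search(
    collection_name="demo_collection",
    data=[[random.random() for _ in range(128)]],
    limit=2,
)
print(results)
```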
## Installation
You can install Milvus's Python client, pymilvus, with pip:
```bash
pip install pymilvus
```
## Documentation
See the [Milvus documentation](https://milvus.io/docs/install-overview.md) for more details about Milvus in general.