# What does this PR do?

See https://github.com/meta-llama/llama-stack/pull/1171, the original PR. Author: @zc277584121

feat: add [Milvus](https://milvus.io/) vectorDB

Note: I use `MilvusClient` to implement it instead of `AsyncMilvusClient`, because in my testing `AsyncMilvusClient` raised event-loop issues; I think the async SDK is not yet robust enough to be compatible with the llama_stack framework. (A minimal sketch of the synchronous client usage is included after this description.)

## Test Plan

The change has passed the unit tests and end-to-end tests. Here are my end-to-end test logs, including the client code, client log, and server logs for both the inline and remote settings: [test_end2end_logs.zip](https://github.com/user-attachments/files/18964391/test_end2end_logs.zip)

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Cheney Zhang <chen.zhang@zilliz.com>
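For context on the client choice above, here is a minimal sketch of how the synchronous `pymilvus.MilvusClient` API is typically used for create/insert/search. The collection name, dimension, and data below are illustrative only and are not taken from this PR's implementation; the local-file URI assumes Milvus Lite is installed, while a remote URI such as `http://localhost:19530` would target a running Milvus server instead.

```python
# Illustrative sketch of synchronous pymilvus MilvusClient usage.
# Collection name, dimension, and data are hypothetical, not from this PR.
from pymilvus import MilvusClient

# Milvus Lite stores data in a local file (assumes milvus-lite is installed);
# pass a server URI like "http://localhost:19530" for a remote deployment.
client = MilvusClient("milvus_demo.db")

# Create a simple collection with a fixed vector dimension.
client.create_collection(collection_name="demo_chunks", dimension=4)

# Insert a few rows; each row carries an id, a vector, and arbitrary scalar fields.
client.insert(
    collection_name="demo_chunks",
    data=[
        {"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "text": "first chunk"},
        {"id": 2, "vector": [0.2, 0.1, 0.4, 0.3], "text": "second chunk"},
    ],
)

# Search with a query vector and return the stored text alongside each hit.
results = client.search(
    collection_name="demo_chunks",
    data=[[0.1, 0.2, 0.3, 0.4]],
    limit=2,
    output_fields=["text"],
)
print(results)
```

Because these calls are blocking, a common pattern when they are invoked from async code is to run them in a thread executor, which sidesteps the event-loop issues observed with `AsyncMilvusClient`.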