# What does this PR do?

Adding Provider sections to docs (some of these will be empty and need updating). This PR is still a draft while I seek feedback from other contributors. I opened it to make the structure visible in the linked GitHub issue.

Closes https://github.com/meta-llama/llama-stack/issues/1189

- Providers Overview page
- SQLite-Vec specific page

## Test Plan

N/A

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

---
orphan: true
---
# Qdrant

[Qdrant](https://qdrant.tech/documentation/) is a remote vector database provider for Llama Stack. It allows you to store and query vector embeddings in a dedicated vector search engine, giving you fast and efficient vector retrieval.

## Features

- Easy to use
- Fully integrated with Llama Stack

## Usage

To use Qdrant in your Llama Stack project, follow these steps:

1. Install the necessary dependencies.
2. Configure your Llama Stack project to use Qdrant (a configuration sketch follows this list).
3. Start storing and querying vectors.

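As a rough illustration of step 2, the sketch below shows how a Qdrant provider might be declared in a Llama Stack run configuration. Treat it as a sketch only: the `remote::qdrant` provider type string and the config field names (`url`, `api_key`) are assumptions for illustration, so check the provider's configuration reference for the exact schema.

```yaml
# Hypothetical run-configuration excerpt; the provider type and config field names are assumptions.
providers:
  vector_io:
    - provider_id: qdrant
      provider_type: remote::qdrant     # assumed provider type string
      config:
        url: http://localhost:6333      # assumed field: address of a running Qdrant server
        api_key: ${env.QDRANT_API_KEY}  # assumed field: only needed for secured or managed deployments
```
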
## Installation

You can install Qdrant using Docker:

```bash
docker pull qdrant/qdrant
```

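Pulling the image only downloads Qdrant. To use it as a remote provider you also need a running server; for example, following Qdrant's quickstart, you can start a local instance with its REST port exposed:

```bash
# Start a local Qdrant server; 6333 is Qdrant's default REST API port.
docker run -p 6333:6333 qdrant/qdrant
```
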
## Documentation

See the [Qdrant documentation](https://qdrant.tech/documentation/) for more details about Qdrant in general.