# remote::weaviate

## Description

Weaviate is a vector database provider for Llama Stack. It allows you to store and query vectors directly within a Weaviate database. That means you're not limited to storing vectors in memory or in a separate service.

## Features

Weaviate supports:

- Storage of embeddings and their metadata
- Vector search
- Full-text search
- Hybrid search
- Document storage
- Metadata filtering
- Multi-modal retrieval
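To make the hybrid-search feature concrete, the following toy sketch blends a vector-similarity score with a keyword-overlap score. This is not Weaviate's implementation (Weaviate ranks hybrid results server-side, controlled by an `alpha` parameter); it only illustrates the idea of fusing the two signals, with the keyword score as a crude stand-in for BM25.

```python
# Toy illustration of hybrid search: blend vector similarity with keyword
# overlap. NOT Weaviate's actual algorithm -- an illustrative sketch only.
import math


def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def keyword_score(query, text):
    # Fraction of query terms present in the document (crude stand-in for BM25).
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)


def hybrid_score(query, text, q_vec, d_vec, alpha=0.5):
    # alpha blends the two signals: alpha=1 is pure vector search,
    # alpha=0 is pure keyword search.
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, text)


score = hybrid_score("vector database", "Weaviate is a vector database",
                     [1.0, 0.0], [1.0, 0.0])
print(round(score, 2))  # 1.0: both signals agree
```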

## Usage

To use Weaviate in your Llama Stack project, follow these steps:

1. Install the necessary dependencies.
2. Configure your Llama Stack project to use Weaviate.
3. Start storing and querying vectors.
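Concretely, step 2 amounts to a provider entry in your distribution's run configuration. The sketch below is illustrative: the `provider_id` value and the `all-MiniLM-L6-v2` model id are placeholders, while the `kvstore` and embedding fields mirror the configuration options documented in this page.

```yaml
# Illustrative run.yaml fragment (field values are placeholders)
providers:
  vector_io:
  - provider_id: weaviate
    provider_type: remote::weaviate
    config:
      kvstore:
        type: sqlite
        db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/weaviate_registry.db
      embedding_model: all-MiniLM-L6-v2   # optional; placeholder model id
```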

## Installation

To install and run Weaviate, see the Weaviate quickstart documentation.
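For local development, the quickstart path is typically a Docker container plus the Python client. The commands below are a sketch under that assumption; consult Weaviate's quickstart for the current image tag and port layout.

```shell
# Install the Weaviate Python client (used by the remote::weaviate provider).
pip install weaviate-client

# Run a local Weaviate instance (image name per Weaviate's Docker docs;
# 8080 is the HTTP port, 50051 the gRPC port).
docker run -d -p 8080:8080 -p 50051:50051 \
  cr.weaviate.io/semitechnologies/weaviate:latest
```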

## Documentation

See Weaviate's documentation for more details about Weaviate in general.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `embedding_model` | `str \| None` | No | | Optional default embedding model for this provider. If not specified, the system default is used. |
| `embedding_dimension` | `int \| None` | No | | Optional embedding dimension override. Only needed for models with variable dimensions (e.g., Matryoshka embeddings). If not specified, the dimension is looked up automatically from the model registry. |

## Sample Configuration

```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/weaviate_registry.db
```