remote::chromadb
Description
Chroma is an inline and remote vector database provider for Llama Stack. It allows you to store and query vectors directly within a Chroma database. That means you're not limited to storing vectors in memory or in a separate service.
Features
Chroma supports:
- Store embeddings and their metadata
- Vector search
- Full-text search
- Document storage
- Metadata filtering
- Multi-modal retrieval
Usage
To use Chroma in your Llama Stack project, follow these steps:
- Install the necessary dependencies.
- Configure your Llama Stack project to use chroma.
- Start storing and querying vectors.
Installation
You can install the chromadb package using pip:
pip install chromadb
Documentation
See Chroma's documentation for more details about Chroma in general.
Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| url | str \| None | No | PydanticUndefined | |
| embedding_model | str \| None | No | | Optional default embedding model for this provider. If not specified, will use system default. |
| embedding_dimension | int \| None | No | | Optional embedding dimension override. Only needed for models with variable dimensions (e.g., Matryoshka embeddings). If not specified, will auto-lookup from model registry. |
Sample Configuration
url: ${env.CHROMADB_URL}
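In context, this config block sits under a vector_io provider entry in a Llama Stack run config. A sketch of the surrounding structure, assuming the common provider_id/provider_type layout (the provider_id value is illustrative):

```yaml
vector_io:
  - provider_id: chromadb
    provider_type: remote::chromadb
    config:
      url: ${env.CHROMADB_URL}
```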