llama-stack-mirror/llama_stack/providers/adapters
Anush 4c3d33e6f4
feat: Qdrant Vector index support (#221)
This PR adds support for Qdrant (https://qdrant.tech/) as a vector memory provider.

I've unit-tested the methods to confirm that they work as intended.

To run Qdrant locally:

```shell
docker run -p 6333:6333 qdrant/qdrant
```
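As a rough illustration of what a vector memory backend like Qdrant does under the hood (this is not the adapter's actual code, and the function and document names are hypothetical), a top-k similarity search over embeddings can be sketched as:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    # index: list of (doc_id, embedding) pairs; returns the k most similar docs.
    scored = [(doc_id, cosine_similarity(query, emb)) for doc_id, emb in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy index with 3-dimensional embeddings (real embeddings are much larger).
index = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.0, 1.0, 0.0]),
    ("doc-c", [0.7, 0.7, 0.0]),
]
print(top_k([1.0, 0.1, 0.0], index, k=2))
```

A dedicated engine like Qdrant replaces this linear scan with an approximate nearest-neighbor index so that lookups stay fast at scale.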
2024-10-22 12:50:19 -07:00
agents [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) 2024-09-23 14:22:22 -07:00
inference add completion() for ollama (#280) 2024-10-21 22:26:33 -07:00
memory feat: Qdrant Vector index support (#221) 2024-10-22 12:50:19 -07:00
safety Remove "routing_table" and "routing_key" concepts for the user (#201) 2024-10-10 10:24:13 -07:00
telemetry [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) 2024-09-23 14:22:22 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00