llama-stack-mirror/llama_stack/providers/utils/memory
Francisco Arceo cc19b56c87
chore: OpenAI compatibility for Milvus (#2470)
# What does this PR do?
Closes https://github.com/meta-llama/llama-stack/issues/2461



## Test Plan
Tested with the `ollama` distribution template and updated the `vector_io`
provider to:
```yaml
vector_io:
- provider_id: milvus
  provider_type: inline::milvus
  config:
    db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/milvus_store.db
    kvstore:
      type: sqlite
      db_name: milvus_registry.db
```

Ran the stack:
```bash
llama stack run ./llama_stack/templates/ollama/run.yaml --image-type venv --env OLLAMA_URL="http://0.0.0.0:11434"
```

Ran the tests:
```bash
pytest -sv --stack-config=http://localhost:8321 tests/integration/vector_io/test_openai_vector_stores.py  --embedding-model all-MiniLM-L6-v2
```
All tests passed.
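
Beyond the pytest run, a quick manual smoke test can exercise the OpenAI-compatible vector store endpoints on the running stack. The sketch below is illustrative only: the `/v1/openai/v1` base path, the dummy API key, and the top-level `client.vector_stores` accessor (recent `openai` Python clients; older releases expose it under `client.beta.vector_stores`) are assumptions, not something this PR prescribes.

```python
# Illustrative smoke test (see assumptions above): point the standard OpenAI
# client at the local Llama Stack server and create a vector store that the
# inline Milvus provider configured earlier will back.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed OpenAI-compat prefix
    api_key="none",                                 # local server, no real key needed
)

# Create a vector store and confirm it shows up in the listing.
store = client.vector_stores.create(name="demo-milvus-store")
print(store.id, store.status)

for vs in client.vector_stores.list():
    print(vs.id, vs.name)
```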

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-06-27 16:00:36 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `file_utils.py` | Update the "InterleavedTextMedia" type (#635) | 2024-12-17 11:18:31 -08:00 |
| `openai_vector_store_mixin.py` | chore: OpenAI compatibility for Milvus (#2470) | 2025-06-27 16:00:36 -07:00 |
| `vector_store.py` | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |