llama-stack-mirror/llama_stack/providers/registry
Francisco Arceo cc19b56c87
chore: OpenAI compatibility for Milvus (#2470)
# What does this PR do?
Closes https://github.com/meta-llama/llama-stack/issues/2461



## Test Plan
Tested with the `ollama` distribution template, with the `vector_io`
provider updated to:
```yaml
vector_io:
- provider_id: milvus
  provider_type: inline::milvus
  config:
    db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/milvus_store.db
    kvstore:
      type: sqlite
      db_name: milvus_registry.db
```
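With the Milvus provider configured this way, the OpenAI-compatible vector store routes can be exercised directly. The sketch below is not part of this PR; it assumes the `openai` Python SDK and that the stack serves its OpenAI-compatible API at `http://localhost:8321/v1/openai/v1` with no real API key required locally. Both the base URL prefix and the key handling are assumptions and may differ by stack version.

```python
# Minimal sketch (assumptions noted above): create and delete a vector store
# via the OpenAI-compatible API, backed here by the inline Milvus provider.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # assumed OpenAI-compat prefix
    api_key="none",  # assumption: local stack does not check the key
)

# Older openai SDKs expose this as client.beta.vector_stores instead.
vector_store = client.vector_stores.create(name="demo-store")
print(vector_store.id)

# Clean up the store created for this check.
client.vector_stores.delete(vector_store_id=vector_store.id)
```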

Ran the stack
```bash
llama stack run ./llama_stack/templates/ollama/run.yaml --image-type venv --env OLLAMA_URL="http://0.0.0.0:11434"
```
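Before running the tests, it can help to confirm the server is actually up. A minimal sketch, assuming the stack exposes a health endpoint at `/v1/health` on port 8321 (the exact path and response shape may vary by version):

```python
# Readiness check (assumption: health endpoint lives at /v1/health).
import requests

resp = requests.get("http://localhost:8321/v1/health", timeout=5)
resp.raise_for_status()
print(resp.json())  # should report a healthy status if the stack is running
```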

Ran the tests:
```
pytest -sv --stack-config=http://localhost:8321 tests/integration/vector_io/test_openai_vector_stores.py  --embedding-model all-MiniLM-L6-v2
```
All tests passed.

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-06-27 16:00:36 -07:00
| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| agents.py | feat: add deps dynamically based on metastore config (#2405) | 2025-06-05 14:07:25 -07:00 |
| datasetio.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| eval.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| files.py | feat: reference implementation for files API (#2330) | 2025-06-02 21:54:24 -07:00 |
| inference.py | chore: isolate bare minimum project dependencies (#2282) | 2025-06-26 10:14:27 +02:00 |
| post_training.py | feat: add huggingface post_training impl (#2132) | 2025-05-16 14:41:28 -07:00 |
| safety.py | chore: isolate bare minimum project dependencies (#2282) | 2025-06-26 10:14:27 +02:00 |
| scoring.py | chore: isolate bare minimum project dependencies (#2282) | 2025-06-26 10:14:27 +02:00 |
| telemetry.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| tool_runtime.py | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| vector_io.py | chore: OpenAI compatibility for Milvus (#2470) | 2025-06-27 16:00:36 -07:00 |