Mirror of https://github.com/meta-llama/llama-stack.git
Extracts common OpenAI vector-store code into its own mixin so that all providers can share the same core logic. This also makes it easy for Llama Stack to support both the vector-stores and Llama Stack APIs in the interim, so that both share the same underlying vector DBs. Each provider contains storage-specific logic to `create / edit / delete / list` vector DBs, while the plumbing logic is standardized in the common code. Ensured that this works well with both faiss and sqlite-vec.

### Test Plan

```
llama stack run starter
pytest -sv --stack-config http://localhost:8321 tests/integration/vector_io/test_openai_vector_stores.py --embedding-model all-MiniLM-L6-v2
```
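As a rough illustration of the mixin split described above, here is a minimal sketch. The names used here (`OpenAIVectorStoreMixin`, `_save_openai_vector_store`, `_delete_openai_vector_store`) are assumptions for illustration, not the exact identifiers introduced by this change: the mixin owns the OpenAI-compatible plumbing, and each provider implements only the storage hooks.

```python
# Minimal sketch of the mixin pattern; all names are illustrative assumptions,
# not the exact identifiers introduced by this change.
from abc import ABC, abstractmethod
from typing import Any


class OpenAIVectorStoreMixin(ABC):
    """Shared OpenAI vector-store plumbing; providers supply only the storage hooks."""

    @abstractmethod
    async def _save_openai_vector_store(self, store_id: str, store_info: dict[str, Any]) -> None:
        """Provider-specific persistence (e.g. a faiss index file or a sqlite-vec table)."""

    @abstractmethod
    async def _delete_openai_vector_store(self, store_id: str) -> None:
        """Provider-specific deletion of the stored vector store."""

    async def openai_create_vector_store(self, name: str, **kwargs: Any) -> dict[str, Any]:
        # Common plumbing: build the store record once, then delegate persistence
        # to whatever backend the concrete provider uses.
        store_info: dict[str, Any] = {"id": f"vs_{name}", "object": "vector_store", "name": name, **kwargs}
        await self._save_openai_vector_store(store_info["id"], store_info)
        return store_info

    async def openai_delete_vector_store(self, store_id: str) -> dict[str, Any]:
        # Common plumbing: delegate the storage-specific delete, return an OpenAI-style response.
        await self._delete_openai_vector_store(store_id)
        return {"id": store_id, "object": "vector_store.deleted", "deleted": True}
```

A concrete provider (for example a faiss- or sqlite-vec-backed adapter) would then inherit this mixin and implement only the two storage hooks, keeping the OpenAI-facing behavior identical across providers.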
Directory contents:

- `agents`
- `batch_inference`
- `benchmarks`
- `common`
- `datasetio`
- `datasets`
- `eval`
- `files`
- `inference`
- `inspect`
- `models`
- `post_training`
- `providers`
- `safety`
- `scoring`
- `scoring_functions`
- `shields`
- `synthetic_data_generation`
- `telemetry`
- `tools`
- `vector_dbs`
- `vector_io`
- `__init__.py`
- `datatypes.py`
- `resource.py`
- `version.py`