llama-stack-mirror/llama_stack/providers/remote/memory
Dinesh Yeduguru 96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) Adds the ability to generate embeddings in all supported inference
providers.
2) Moves all the memory providers to use the inference API, and improves
the memory tests to set up the inference stack correctly and use the
embedding models.

This is a merge from #589 and #598
2024-12-12 11:47:50 -08:00
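
A minimal sketch of the pattern this change describes: a memory adapter that delegates embedding generation to the injected inference API rather than computing embeddings itself. All names here (Inference, MemoryAdapter, insert_documents, _store) are illustrative assumptions, not the actual llama-stack interfaces.

```python
from typing import List, Protocol


class Inference(Protocol):
    # Assumed shape of the inference API's embeddings call (hypothetical signature).
    async def embeddings(self, model_id: str, contents: List[str]) -> List[List[float]]:
        ...


class MemoryAdapter:
    def __init__(self, inference_api: Inference, embedding_model: str) -> None:
        # The inference provider is injected; the adapter no longer loads its own
        # embedding model.
        self.inference_api = inference_api
        self.embedding_model = embedding_model

    async def insert_documents(self, documents: List[str]) -> None:
        # Embeddings are produced by the inference stack, then handed to the
        # backend-specific vector store.
        vectors = await self.inference_api.embeddings(
            model_id=self.embedding_model,
            contents=documents,
        )
        await self._store(documents, vectors)

    async def _store(self, documents: List[str], vectors: List[List[float]]) -> None:
        # Backend-specific upsert (Chroma, pgvector, Qdrant, Weaviate, ...).
        raise NotImplementedError
```

Under this arrangement, the remote memory adapters listed below share one embedding path, and tests exercise them by standing up the inference stack with an embedding model first.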
chroma Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
pgvector Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
qdrant Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
sample Allow using an "inline" version of Chroma using PersistentClient (#567) 2024-12-11 16:02:04 -08:00
weaviate Make embedding generation go through inference (#606) 2024-12-12 11:47:50 -08:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00