llama-stack-mirror/llama_toolchain/memory
Ashwin Bharambe b6a3ef51da Introduce a "Router" layer for providers
Some providers are best factored out into thin routing
layers on top of other providers. Consider two examples:

- The inference API should be a routing layer over inference providers,
  routed using the "model" key
- The memory banks API is another instance where various memory bank
  types will be provided by independent providers (e.g., a vector store
  is served by Chroma while a key-value memory can be served by Redis or
  PGVector)

This commit introduces a generalized routing layer for this purpose.
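
As a rough sketch of the idea (illustrative only, not the actual
llama_toolchain API; the Router class and its register/route methods
below are hypothetical names), a generic dispatcher keyed on a routing
attribute might look like:

    from typing import Dict, Generic, TypeVar

    P = TypeVar("P")  # concrete provider implementation type

    class Router(Generic[P]):
        """Thin routing layer: maps values of a routing key to providers."""

        def __init__(self, routing_key: str) -> None:
            self.routing_key = routing_key     # e.g. "model" or "memory_bank_type"
            self.providers: Dict[str, P] = {}  # routing-key value -> provider

        def register(self, key_value: str, provider: P) -> None:
            self.providers[key_value] = provider

        def route(self, key_value: str) -> P:
            try:
                return self.providers[key_value]
            except KeyError:
                raise ValueError(
                    f"no provider registered for {self.routing_key}={key_value!r}"
                )

    # Usage mirroring the memory-banks example above (backends are stand-ins):
    router: Router[object] = Router(routing_key="memory_bank_type")
    router.register("vector", object())    # e.g. a Chroma-backed vector store
    router.register("keyvalue", object())  # e.g. a Redis- or PGVector-backed store
    provider = router.route("vector")

Keying dispatch on a single declared routing attribute keeps the router
a thin pass-through: it owns no provider logic itself and only resolves
which backend should handle a given request.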
2024-09-16 17:04:45 -07:00
adapters Add Chroma and PGVector adapters (#56) 2024-09-06 18:53:17 -07:00
api Support data: in URL for memory. Add ootb support for pdfs (#67) 2024-09-12 13:00:21 -07:00
common CLI Update: build -> configure -> run (#69) 2024-09-16 11:02:26 -07:00
meta_reference Simplified Telemetry API and tying it to logger (#57) 2024-09-11 14:25:37 -07:00
router Introduce a "Router" layer for providers 2024-09-16 17:04:45 -07:00
__init__.py Initial commit 2024-07-23 08:32:33 -07:00
client.py Introduce a "Router" layer for providers 2024-09-16 17:04:45 -07:00
providers.py Introduce a "Router" layer for providers 2024-09-16 17:04:45 -07:00