llama-stack-mirror/llama_stack
Dinesh Yeduguru 96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) Adds the ability to generate embeddings in all supported inference providers.
2) Moves all the memory providers to use the inference API, and improves the memory tests to set up the inference stack correctly and use the embedding models.

This merges #589 and #598.
2024-12-12 11:47:50 -08:00
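The change described above, routing embedding generation through the inference API rather than having each memory provider embed documents itself, can be sketched as follows. This is an illustrative mock, not the actual llama-stack API: `InferenceAPI`, `StubInference`, and `MemoryProvider` are hypothetical names, and the real providers live under `providers/` in this tree.

```python
# Hypothetical sketch: a memory provider that delegates embedding
# generation to an inference API instead of loading a model locally.
from dataclasses import dataclass, field
from typing import Protocol


class InferenceAPI(Protocol):
    """Minimal stand-in for an inference provider's embeddings surface."""

    def embeddings(self, model_id: str, contents: list[str]) -> list[list[float]]: ...


class StubInference:
    """Toy inference provider returning deterministic 2-d embeddings."""

    def embeddings(self, model_id: str, contents: list[str]) -> list[list[float]]:
        return [
            [float(len(text)), float(sum(map(ord, text)) % 100)]
            for text in contents
        ]


@dataclass
class MemoryProvider:
    inference: InferenceAPI
    embedding_model: str
    store: list = field(default_factory=list)

    def insert(self, documents: list[str]) -> None:
        # Embeddings come from the inference stack, so every memory
        # backend shares one embedding path.
        vectors = self.inference.embeddings(self.embedding_model, documents)
        self.store.extend(zip(documents, vectors))


memory = MemoryProvider(inference=StubInference(), embedding_model="all-MiniLM-L6-v2")
memory.insert(["hello", "world"])
```

The design point is that swapping the embedding model or provider then only touches the inference configuration, not each memory backend.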
| Path | Last commit | Date |
|------|-------------|------|
| apis | Make embedding generation go through inference (#606) | 2024-12-12 11:47:50 -08:00 |
| cli | doc: llama-stack build --config help text references old directory (#596) | 2024-12-10 17:42:02 -08:00 |
| distribution | Make embedding generation go through inference (#606) | 2024-12-12 11:47:50 -08:00 |
| providers | Make embedding generation go through inference (#606) | 2024-12-12 11:47:50 -08:00 |
| scripts | Integrate distro docs into the restructured docs | 2024-11-20 23:20:05 -08:00 |
| templates | Fix issue 586 (#594) | 2024-12-10 10:22:04 -08:00 |
| __init__.py | Miscellaneous fixes around telemetry, library client and run yaml autogen | 2024-12-08 20:40:22 -08:00 |