llama-stack-mirror/llama_toolchain/memory
Xi Yan d9147f3184
CLI Update: build -> configure -> run (#69)
* remove configure from build

* remove config from build

* configure to regenerate file

* update memory providers

* remove comments

* update build script

* add readme

* update doc

* rename getting started

* update build cli

* update docker build script

* configure update

* clean up configure

* [tmp fix] hardware requirement tmp fix

* clean up build

* fix configure

* add example build files for conda & docker

* remove resolve_distribution_spec

* remove available_distribution_specs

* example build files

* update example build files

* more clean up on build

* add name args to override name

* move distribution to yaml files

* generate distribution specs

* getting started guide

* getting started

* add build yaml to Dockerfile

* cleanup distribution_dependencies

* configure from docker image name

* build relative paths

* minor comment

* getting started

* Update getting_started.md

* Update getting_started.md

* address comments, configure within docker file

* remove distribution types!

* update getting started

* update documentation

* remove listing distribution

* minor heading

* address nits, remove docker_image=null

* gitignore
2024-09-16 11:02:26 -07:00
adapters Add Chroma and PGVector adapters (#56) 2024-09-06 18:53:17 -07:00
api Support data: in URL for memory. Add ootb support for pdfs (#67) 2024-09-12 13:00:21 -07:00
common CLI Update: build -> configure -> run (#69) 2024-09-16 11:02:26 -07:00
meta_reference Simplified Telemetry API and tying it to logger (#57) 2024-09-11 14:25:37 -07:00
__init__.py Initial commit 2024-07-23 08:32:33 -07:00
client.py Support data: in URL for memory. Add ootb support for pdfs (#67) 2024-09-12 13:00:21 -07:00
providers.py add pypdf 2024-09-13 17:04:43 -07:00