llama-stack/llama_toolchain/inference/meta_reference
Xi Yan d9147f3184
CLI Update: build -> configure -> run (#69)
* remove configure from build

* remove config from build

* configure to regenerate file

* update memory providers

* remove comments

* update build script

* add README

* update doc

* rename getting started

* update build cli

* update docker build script

* configure update

* clean up configure

* [tmp fix] hardware requirement tmp fix

* clean up build

* fix configure

* add example build files for conda & docker

* remove resolve_distribution_spec

* remove available_distribution_specs

* example build files

* update example build files

* more clean up on build

* add name args to override name

* move distribution to yaml files

* generate distribution specs

* getting started guide

* getting started

* add build yaml to Dockerfile

* cleanup distribution_dependencies

* configure from docker image name

* build relative paths

* minor comment

* getting started

* Update getting_started.md

* Update getting_started.md

* address comments, configure within docker file

* remove distribution types!

* update getting started

* update documentation

* remove listing distribution

* minor heading

* address nits, remove docker_image=null

* gitignore
2024-09-16 11:02:26 -07:00
__init__.py API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51) 2024-09-03 22:39:39 -07:00
config.py Nuke hardware_requirements from SKUs 2024-09-13 16:39:02 -07:00
generation.py CLI Update: build -> configure -> run (#69) 2024-09-16 11:02:26 -07:00
inference.py Remove request wrapper migration (#64) 2024-09-12 15:03:49 -07:00
model_parallel.py Nuke hardware_requirements from SKUs 2024-09-13 16:39:02 -07:00
parallel_utils.py Introduce Llama stack distributions (#22) 2024-08-08 13:38:41 -07:00