llama-stack-mirror/llama_stack/distribution
Last commit: 2024-09-29 20:00:51 -07:00
Name                    | Last commit message                                                                                        | Date
routers                 | Use inference APIs for running llama guard                                                                 | 2024-09-24 17:02:57 -07:00
server                  | Pull (extract) provider data from the provider instead of pushing from the top (#148)                      | 2024-09-29 20:00:51 -07:00
templates               | Make TGI adapter compatible with HF Inference API (#97)                                                    | 2024-09-25 14:08:31 -07:00
utils                   | pre-commit lint                                                                                            | 2024-09-28 16:04:41 -07:00
__init__.py             | API Updates (#73)                                                                                          | 2024-09-17 19:51:35 -07:00
build.py                | [CLI] remove dependency on CONDA_PREFIX in CLI (#144)                                                      | 2024-09-28 16:46:47 -07:00
build_conda_env.sh      | [CLI] remove dependency on CONDA_PREFIX in CLI (#144)                                                      | 2024-09-28 16:46:47 -07:00
build_container.sh      | fix #100                                                                                                   | 2024-09-25 15:11:56 -07:00
common.sh               | API Updates (#73)                                                                                          | 2024-09-17 19:51:35 -07:00
configure.py            | pre-commit lint                                                                                            | 2024-09-28 16:04:41 -07:00
configure_container.sh  | Add DOCKER_BINARY / DOCKER_OPTS to all scripts                                                             | 2024-09-19 10:26:41 -07:00
datatypes.py            | [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92) | 2024-09-23 14:22:22 -07:00
distribution.py         | Add httpx to core server deps                                                                              | 2024-09-24 10:42:04 -07:00
request_headers.py      | Pull (extract) provider data from the provider instead of pushing from the top (#148)                      | 2024-09-29 20:00:51 -07:00
start_conda_env.sh      | API Updates (#73)                                                                                          | 2024-09-17 19:51:35 -07:00
start_container.sh      | Support for Llama3.2 models and Swift SDK (#98)                                                            | 2024-09-25 10:29:58 -07:00