llama-stack-mirror/llama_stack/templates/vllm-gpu
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| `build.yaml` | Updated the configuration templates to include the builtin preprocessors. | 2025-03-07 16:08:14 +01:00 |
| `run.yaml` | Merge-related changes. | 2025-04-02 19:56:44 +02:00 |
| `vllm.py` | Moving preprocessors.py to a separate directory. | 2025-04-03 11:14:11 +02:00 |
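For context, the commit log above hints that parts of this template (the docs and config files such as `build.yaml`/`run.yaml`) are partially codegen'ed from the Python module. The sketch below only illustrates that "Python template definition → generated YAML" pattern; the names (`TemplateSpec`, `render_run_config`) and the YAML layout are assumptions for illustration, not the actual `llama_stack` API.

```python
# Hypothetical sketch of a template module that codegens a run config.
# None of these names come from llama_stack; they only illustrate the
# "Python template -> generated YAML" pattern suggested by the commit log.
from dataclasses import dataclass, field

import yaml  # assumes PyYAML is installed


@dataclass
class TemplateSpec:
    """Describes one distribution template (e.g. vllm-gpu)."""
    name: str
    inference_provider: str
    extra_providers: dict = field(default_factory=dict)

    def render_run_config(self) -> str:
        """Render a run.yaml-style document for this template."""
        doc = {
            "image_name": self.name,
            "providers": {
                "inference": [{"provider_type": self.inference_provider}],
                **self.extra_providers,
            },
        }
        return yaml.safe_dump(doc, sort_keys=False)


if __name__ == "__main__":
    # Hypothetical usage: generate a run config for a vLLM-on-GPU setup.
    spec = TemplateSpec(name="vllm-gpu", inference_provider="inline::vllm")
    print(spec.render_run_config())
```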