llama-stack-mirror/llama_stack/templates/vllm-gpu
Latest commit: "no-inline" (0f5db647fe) by Ashwin Bharambe, 2025-05-01 14:25:07 -07:00
__init__.py   Update more distribution docs to be simpler and partially codegen'ed   2024-11-20 22:03:44 -08:00
build.yaml    no-inline                                                               2025-05-01 14:25:07 -07:00
run.yaml      no-inline                                                               2025-05-01 14:25:07 -07:00
vllm.py       no-inline                                                               2025-05-01 14:25:07 -07:00
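
For orientation: in a llama-stack template directory, build.yaml declares which providers the distribution is built with, run.yaml holds the runtime configuration, and vllm.py contains the template code that generates them. Below is a minimal sketch of what a build.yaml for a vLLM GPU distribution might look like; the provider identifiers, schema version, and field values are assumptions for illustration, not the actual contents of this file at commit 0f5db647fe.

    # Hypothetical sketch of a llama-stack build.yaml for a vLLM GPU distribution.
    # Follows the general build.yaml shape (distribution_spec + image_type);
    # concrete provider ids and the schema version are assumptions.
    version: '2'
    distribution_spec:
      description: Serve LLM inference with a vLLM engine on GPU
      providers:
        inference:
          - inline::vllm        # assumed id; the "no-inline" commit may replace the inline provider
        safety:
          - inline::llama-guard
        agents:
          - inline::meta-reference
        telemetry:
          - inline::meta-reference
    image_type: conda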