llama-stack-mirror/llama_stack/templates/vllm-gpu
Last commit: 2025-01-17 16:22:00 -08:00
File         Last commit message                                                   Date
__init__.py  Update more distribution docs to be simpler and partially codegen'ed  2024-11-20 22:03:44 -08:00
build.yaml   add mcp runtime as default to all providers                           2025-01-17 16:22:00 -08:00
run.yaml     add mcp runtime as default to all providers                           2025-01-17 16:22:00 -08:00
vllm.py      add mcp runtime as default to all providers                           2025-01-17 16:22:00 -08:00