llama-stack/llama_stack/templates/vllm-gpu

Latest commit c9e5578151 by Ashwin Bharambe, 2025-01-22 10:17:59 -08:00:
[memory refactor][5/n] Migrate all vector_io providers (#835)

See https://github.com/meta-llama/llama-stack/issues/827 for the broader design.

This PR finishes off all the stragglers and migrates everything to the new naming.
__init__.py   Update more distribution docs to be simpler and partially codegen'ed   2024-11-20 22:03:44 -08:00
build.yaml    [memory refactor][5/n] Migrate all vector_io providers (#835)          2025-01-22 10:17:59 -08:00
run.yaml      [memory refactor][5/n] Migrate all vector_io providers (#835)          2025-01-22 10:17:59 -08:00
vllm.py       [memory refactor][5/n] Migrate all vector_io providers (#835)          2025-01-22 10:17:59 -08:00