llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Last updated: 2024-11-06 16:07:17 -08:00
File         Last commit message                         Last commit date
__init__.py  impls -> inline, adapters -> remote (#381)  2024-11-06 14:54:05 -08:00
config.py    impls -> inline, adapters -> remote (#381)  2024-11-06 14:54:05 -08:00
vllm.py      remote::vllm now works with vision models   2024-11-06 16:07:17 -08:00
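For context, config.py in this directory holds the settings for the remote::vllm provider, which connects llama-stack to an already-running vLLM server, and vllm.py implements the inference adapter against that endpoint (per the commit message above, now including vision models). Below is a minimal sketch of what such a remote-provider config could look like; the class name and fields (VLLMInferenceAdapterConfig, url, max_tokens, api_token) are illustrative assumptions, not necessarily the definitions in this config.py at this commit.

```python
# Hypothetical sketch of a remote vLLM adapter config.
# Class and field names are assumptions for illustration only.
from typing import Optional

from pydantic import BaseModel, Field


class VLLMInferenceAdapterConfig(BaseModel):
    """Configuration for connecting to an external, already-running vLLM server."""

    url: Optional[str] = Field(
        default=None,
        description="Base URL of the remote vLLM endpoint, e.g. http://localhost:8000/v1",
    )
    max_tokens: int = Field(
        default=4096,
        description="Default maximum number of tokens to generate per request.",
    )
    api_token: Optional[str] = Field(
        default=None,
        description="Optional bearer token if the vLLM server requires authentication.",
    )
```

With a config along these lines, the adapter in vllm.py would forward chat-completion requests (including image content for vision models) to the configured vLLM URL rather than loading model weights in-process, which is what distinguishes the remote provider from the inline one.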