llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Last updated: 2024-12-17 14:00:43 -08:00
File         Last commit message                          Date
__init__.py  impls -> inline, adapters -> remote (#381)   2024-11-06 14:54:05 -08:00
config.py    Auto-generate distro yamls + docs (#468)     2024-11-18 14:57:06 -08:00
vllm.py      Fix conversion to RawMessage everywhere      2024-12-17 14:00:43 -08:00