llama-stack-mirror/llama_stack/providers/impls/vllm
Last commit: 2024-10-08 17:23:42 -07:00
__init__.py   Inline vLLM inference provider (#181)   2024-10-05 23:34:16 -07:00
config.py     Inline vLLM inference provider (#181)   2024-10-05 23:34:16 -07:00
vllm.py       rename augment_messages                 2024-10-08 17:23:42 -07:00
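
Per the listing, config.py carries the configuration for the inline vLLM inference provider while vllm.py holds the implementation. Below is a minimal sketch of what such a config module might contain, assuming the pydantic-based configuration style used elsewhere in llama-stack; the class name VLLMConfig, its fields, and its defaults are illustrative assumptions, not taken from the repository itself.

```python
# Hypothetical sketch of config.py for an inline (in-process) vLLM provider.
# Field names and defaults are illustrative assumptions.
from pydantic import BaseModel, Field


class VLLMConfig(BaseModel):
    """Configuration for running a vLLM engine inside the stack process."""

    # Model identifier or local path handed to the vLLM engine (assumed field).
    model: str = "Llama3.1-8B-Instruct"
    # Number of GPUs to shard the model across (assumed field).
    tensor_parallel_size: int = 1
    # Upper bound on tokens generated per request (assumed field).
    max_tokens: int = 4096
    # Fraction of GPU memory vLLM may reserve for weights and KV cache (assumed field).
    gpu_memory_utilization: float = Field(default=0.9, ge=0.0, le=1.0)
```

In this sketch the provider implementation in vllm.py would receive a validated VLLMConfig instance and use it to construct the engine, keeping engine tuning knobs out of the inference code itself.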