llama-stack-mirror/llama_stack/providers/impls/vllm
Ashwin Bharambe be3adb0964 Make vllm inference better
Tests still don't pass completely (some hang), so there may be some
underlying threading issues.
2024-10-25 12:03:42 -07:00
__init__.py Fix incorrect completion() signature for Databricks provider (#236) 2024-10-11 08:47:57 -07:00
config.py Make vllm inference better 2024-10-25 12:03:42 -07:00
vllm.py Make vllm inference better 2024-10-25 12:03:42 -07:00
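For orientation, here is a minimal sketch of what a config for an inline vLLM inference provider could look like. The class name, field names, and defaults are illustrative assumptions, not the actual contents of config.py.

```python
# Hypothetical sketch of an inline vLLM provider config; the fields below are
# illustrative assumptions, not the actual contents of config.py.
from pydantic import BaseModel, Field


class VLLMConfig(BaseModel):
    """Configuration for running inference through an in-process vLLM engine."""

    # Model to load into the vLLM engine (illustrative default).
    model: str = Field(default="Llama3.1-8B-Instruct")
    # Number of GPUs to shard the model across.
    tensor_parallel_size: int = Field(default=1)
    # Fraction of GPU memory the engine is allowed to reserve.
    gpu_memory_utilization: float = Field(default=0.9)
    # Skip CUDA graph capture to trade some throughput for faster startup.
    enforce_eager: bool = Field(default=False)
```

A config like this would typically be validated in __init__.py when the provider is instantiated and then passed to the engine setup code in vllm.py.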