llama-stack-mirror/llama_stack/providers/adapters/inference/vllm
Ashwin Bharambe · b10e9f46bb · 2024-11-06 14:42:44 -08:00

Enable remote::vllm (#384)

* Enable remote::vllm
* Kill the giant list of hard coded models
__init__.py
config.py
vllm.py