llama-stack-mirror/llama_stack/providers/inline/inference/vllm
Last commit: 2024-11-20 16:09:18 -08:00
File          Last commit message                                                              Date
__init__.py   Add provider deprecation support; change directory structure (#397)             2024-11-07 13:04:53 -08:00
config.py     Since we are pushing for HF repos, we should accept them in inference configs   2024-11-20 16:09:18 -08:00
vllm.py       unregister for memory banks and remove update API (#458)                        2024-11-14 17:12:11 -08:00
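The config.py commit above concerns accepting Hugging Face repo IDs in inference configs. As a rough, hedged sketch only (the class name, field names, and defaults below are assumptions for illustration, not the actual llama_stack inline vLLM provider config), a config that takes an HF repo reference as its model field might look like this:

    # Hypothetical sketch -- names and defaults are assumptions,
    # not the actual llama_stack inline vLLM provider config.
    from pydantic import BaseModel, Field


    class InlineVLLMConfigSketch(BaseModel):
        # Accept a Hugging Face repo id (e.g. "meta-llama/Llama-3.1-8B-Instruct")
        # or a local path as the model reference, per the commit message above.
        model: str = Field(
            default="meta-llama/Llama-3.1-8B-Instruct",
            description="Hugging Face repo id or local path of the model to serve",
        )
        tensor_parallel_size: int = 1
        gpu_memory_utilization: float = 0.9


    if __name__ == "__main__":
        cfg = InlineVLLMConfigSketch(model="meta-llama/Llama-3.1-8B-Instruct")
        print(cfg)

The point of the sketch is simply that the model field is a plain string, so an HF repo id can be passed through unchanged to the underlying vLLM engine.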