llama-stack-mirror/llama_stack
Ashwin Bharambe 70d59b0f5d Make vllm inference better
Tests still don't pass completely (some hang), so there may be some remaining threading issues.
2024-10-24 22:52:47 -07:00
apis           [Evals API][3/n] scoring_functions / scoring meta-reference implementations (#296)   2024-10-24 14:52:30 -07:00
cli            llama stack distributions / templates / docker refactor (#266)                       2024-10-21 11:17:53 -07:00
distribution   start_container.sh prefix llamastack->distribution name                              2024-10-24 21:29:17 -07:00
providers      Make vllm inference better                                                           2024-10-24 22:52:47 -07:00
scripts        Add a test for CLI, but not fully done so disabled                                    2024-09-19 13:27:07 -07:00
__init__.py    API Updates (#73)                                                                     2024-09-17 19:51:35 -07:00