llama-stack-mirror/llama_stack/providers
Ashwin Bharambe 70d59b0f5d Make vllm inference better
Tests still don't pass completely (some hang), so there may be some threading issues.
2024-10-24 22:52:47 -07:00
adapters        completion() for tgi (#295)                                                          2024-10-24 16:02:41 -07:00
impls           Make vllm inference better                                                           2024-10-24 22:52:47 -07:00
registry        [Evals API][3/n] scoring_functions / scoring meta-reference implementations (#296)   2024-10-24 14:52:30 -07:00
tests           completion() for tgi (#295)                                                          2024-10-24 16:02:41 -07:00
utils           completion() for tgi (#295)                                                          2024-10-24 16:02:41 -07:00
__init__.py     API Updates (#73)                                                                    2024-09-17 19:51:35 -07:00
datatypes.py    [Evals API][3/n] scoring_functions / scoring meta-reference implementations (#296)   2024-10-24 14:52:30 -07:00