llama-stack-mirror/llama_stack
Yuan Tang 04d5b9814f
Fix assert message and call to completion_request_to_prompt in remote:vllm (#709)
The current assert message is incorrect, and the `model` arg is not needed in
`completion_request_to_prompt`.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-03 13:44:49 -08:00
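The commit body above describes a call-site fix: the remote vLLM provider was passing a model argument that `completion_request_to_prompt` does not take. The sketch below only illustrates the shape of that kind of fix; the wrapper function `build_vllm_prompt`, the module path `llama_stack.providers.utils.inference.prompt_adapter`, and the exact helper signature are assumptions based on the commit message, not code verified at this revision.

```python
# Hedged sketch of the call-site fix described in commit 04d5b9814f (#709).
# Assumptions: completion_request_to_prompt lives in prompt_adapter and
# accepts just the CompletionRequest; build_vllm_prompt is a hypothetical
# wrapper used only for illustration.
from llama_stack.apis.inference import CompletionRequest
from llama_stack.providers.utils.inference.prompt_adapter import (
    completion_request_to_prompt,
)


def build_vllm_prompt(request: CompletionRequest) -> str:
    # Before the fix, the call also forwarded the model, which the helper
    # does not need:
    #   prompt = completion_request_to_prompt(request, request.model)
    # After the fix, only the request is passed:
    prompt = completion_request_to_prompt(request)
    return prompt
```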
apis [Post training] make validation steps configurable (#715) 2025-01-03 08:43:24 -08:00
cli [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
distribution Fix incorrect entrypoint for broken llama stack run (#706) 2025-01-03 09:47:10 -08:00
providers Fix assert message and call to completion_request_to_prompt in remote:vllm (#709) 2025-01-03 13:44:49 -08:00
scripts Fix to conda env build script 2024-12-17 12:19:34 -08:00
templates Change post training run.yaml inference config (#710) 2025-01-03 08:37:48 -08:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00