Fix assert message and call to completion_request_to_prompt in remote:vllm (#709)
The current assert message is incorrect, and the model arg is not needed in `completion_request_to_prompt`.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
parent 96d8375663
commit 04d5b9814f
1 changed file with 1 addition and 2 deletions
@@ -193,10 +193,9 @@ class VLLMInferenceAdapter(Inference, ModelsProtocolPrivate):
         else:
             assert (
                 not media_present
-            ), "Together does not support media for Completion requests"
+            ), "vLLM does not support media for Completion requests"
             input_dict["prompt"] = await completion_request_to_prompt(
                 request,
-                self.register_helper.get_llama_model(request.model),
                 self.formatter,
             )
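
For context, below is a minimal runnable sketch of the corrected call pattern. The two-argument helper signature (request, formatter) is taken from the diff above; the CompletionRequest and ChatFormat classes, the helper body, and the model name are simplified stand-ins for illustration, not llama-stack's actual implementation.

import asyncio
from dataclasses import dataclass


@dataclass
class CompletionRequest:
    # Simplified stand-in: the request carries the model name and content.
    model: str
    content: str


class ChatFormat:
    # Simplified stand-in for the formatter held as self.formatter.
    def format_content(self, content: str) -> str:
        return content


async def completion_request_to_prompt(
    request: CompletionRequest, formatter: ChatFormat
) -> str:
    # The request itself carries everything needed to render the prompt,
    # which is why the separately resolved llama model argument could be
    # dropped from the call site.
    return formatter.format_content(request.content)


async def main() -> None:
    request = CompletionRequest(model="example-llama-model", content="Say hi")
    # Mirrors the fixed call in VLLMInferenceAdapter: no model argument.
    prompt = await completion_request_to_prompt(request, ChatFormat())
    print(prompt)


asyncio.run(main())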