forked from phoenix-oss/llama-stack-mirror
test: Enable logprobs top_k tests for remote::vllm (#1080)
top_k support was added in https://github.com/meta-llama/llama-stack/pull/1074. The tests should be enabled as well. Verified that the tests pass for remote::vllm:

```
LLAMA_STACK_BASE_URL=http://localhost:5003 pytest -v tests/client-sdk/inference/test_text_inference.py -k " test_completion_log_probs_non_streaming or test_completion_log_probs_streaming"

================================================================ test session starts ================================================================
platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /home/yutang/.conda/envs/distribution-myenv/bin/python3.10
cachedir: .pytest_cache
rootdir: /home/yutang/repos/llama-stack
configfile: pyproject.toml
plugins: anyio-4.8.0
collected 14 items / 12 deselected / 2 selected

tests/client-sdk/inference/test_text_inference.py::test_completion_log_probs_non_streaming[meta-llama/Llama-3.1-8B-Instruct] PASSED [ 50%]
tests/client-sdk/inference/test_text_inference.py::test_completion_log_probs_streaming[meta-llama/Llama-3.1-8B-Instruct] PASSED [100%]

=================================================== 2 passed, 12 deselected, 1 warning in 10.03s ====================================================
```

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
parent 8ff27b58fa
commit efdd60014d

1 changed file with 1 addition and 7 deletions
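
The two enabled tests exercise per-token logprobs with `top_k` against a running distribution. For context, here is a minimal illustrative sketch of that request shape, assuming the llama-stack client SDK's `inference.completion` call with a `logprobs` argument; the base URL, prompt, and assertions below are placeholders, not taken from this commit:

```python
# Illustrative sketch only — mirrors what the enabled tests exercise; the
# client call shape is assumed from the llama-stack client SDK of this era.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5003")  # placeholder URL

response = client.inference.completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    content="Complete the sentence: the capital of France is",  # placeholder prompt
    stream=False,
    sampling_params={"max_tokens": 5},
    logprobs={"top_k": 1},  # top_k support added by PR #1074
)

# Each generated token should carry a logprobs entry with exactly top_k
# alternatives (field names assumed from the llama-stack API schema).
assert response.logprobs
for token_logprobs in response.logprobs:
    assert len(token_logprobs.logprobs_by_token) == 1
```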
tests/client-sdk/inference/test_text_inference.py:

```diff
@@ -14,13 +14,7 @@ PROVIDER_TOOL_PROMPT_FORMAT = {
     "remote::vllm": "json",
 }
 
-PROVIDER_LOGPROBS_TOP_K = set(
-    {
-        "remote::together",
-        "remote::fireworks",
-        # "remote:vllm"
-    }
-)
+PROVIDER_LOGPROBS_TOP_K = {"remote::together", "remote::fireworks", "remote::vllm"}
 
 
 @pytest.fixture(scope="session")
```
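
The rewrite replaces the redundant `set({...})` wrapper with a set literal and, in doing so, also corrects the commented-out `# "remote:vllm"` entry, which was missing a colon, to a live `"remote::vllm"` member. The set presumably gates the logprobs tests; below is a hypothetical sketch of that gating pattern, assuming the tests skip when the active provider is not in the set (the helper name and provider lookup are not shown in this diff):

```python
# Hypothetical gating sketch — the real skip logic in test_text_inference.py
# is not part of this diff; the helper below is an assumption.
import pytest

PROVIDER_LOGPROBS_TOP_K = {"remote::together", "remote::fireworks", "remote::vllm"}

def skip_if_logprobs_top_k_unsupported(inference_provider_type: str) -> None:
    # Tests would call this with the active provider type (e.g. "remote::vllm")
    # and skip for providers that do not support top_k logprobs.
    if inference_provider_type not in PROVIDER_LOGPROBS_TOP_K:
        pytest.skip(f"{inference_provider_type} does not support top_k logprobs")
```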