llama-stack/tests/client-sdk/inference
Yuan Tang efdd60014d
test: Enable logprobs top_k tests for remote::vllm (#1080)
Support for logprobs top_k was added in
https://github.com/meta-llama/llama-stack/pull/1074, so the corresponding
tests should be enabled as well.
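
For reference, a minimal sketch of the kind of request these tests exercise, assuming the llama_stack_client Python SDK's `inference.completion` API, a server running at http://localhost:5003, and the model shown in the test run below. The prompt, sampling parameters, and the `logprobs_by_token` field access are illustrative assumptions, not copied from the test file:

```python
from llama_stack_client import LlamaStackClient

# Point the client at a running llama-stack distribution backed by remote::vllm
# (base URL is an assumption matching the test invocation below).
client = LlamaStackClient(base_url="http://localhost:5003")

# Request per-token log probabilities; top_k controls how many candidate
# tokens are returned per position (values here are illustrative).
response = client.inference.completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    content="Complete the sentence using one word: Roses are red, violets are ",
    stream=False,
    sampling_params={"max_tokens": 5},
    logprobs={"top_k": 1},
)

# Each logprobs entry is expected to map candidate tokens to their log probabilities.
for token_logprobs in response.logprobs:
    print(token_logprobs.logprobs_by_token)
```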

Verified that tests pass for remote::vllm:

```
LLAMA_STACK_BASE_URL=http://localhost:5003 pytest -v tests/client-sdk/inference/test_text_inference.py -k " test_completion_log_probs_non_streaming or test_completion_log_probs_streaming"
================================================================ test session starts ================================================================
platform linux -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /home/yutang/.conda/envs/distribution-myenv/bin/python3.10
cachedir: .pytest_cache
rootdir: /home/yutang/repos/llama-stack
configfile: pyproject.toml
plugins: anyio-4.8.0
collected 14 items / 12 deselected / 2 selected                                                                                                     

tests/client-sdk/inference/test_text_inference.py::test_completion_log_probs_non_streaming[meta-llama/Llama-3.1-8B-Instruct] PASSED           [ 50%]
tests/client-sdk/inference/test_text_inference.py::test_completion_log_probs_streaming[meta-llama/Llama-3.1-8B-Instruct] PASSED               [100%]

=================================================== 2 passed, 12 deselected, 1 warning in 10.03s ====================================================
```

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-13 13:44:57 -05:00
| File | Last commit | Date |
|---|---|---|
| __init__.py | [tests] add client-sdk pytests & delete client.py (#638) | 2024-12-16 12:04:56 -08:00 |
| dog.png | fix vllm base64 image inference (#815) | 2025-01-17 17:07:28 -08:00 |
| test_text_inference.py | test: Enable logprobs top_k tests for remote::vllm (#1080) | 2025-02-13 13:44:57 -05:00 |
| test_vision_inference.py | test: replace blocked image URLs with GitHub-hosted (#1025) | 2025-02-10 22:38:11 -05:00 |