Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-31 02:13:52 +00:00)
This adjusts the vllm openai_completion endpoint to also pass a value of 0 for prompt_logprobs to the backend, instead of only forwarding values greater than zero. The existing test_openai_completion_prompt_logprobs test was parameterized to cover this case as well.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
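A minimal sketch of the kind of change described, in Python. The function and variable names below are illustrative assumptions, not the actual llama-stack implementation; the point is that a truthiness check drops a requested value of 0, while an explicit None check forwards it.

```python
# Illustrative sketch only: build_vllm_params and its signature are
# assumptions for this example, not the real llama-stack code.
from typing import Any, Optional


def build_vllm_params(prompt: str, prompt_logprobs: Optional[int] = None) -> dict[str, Any]:
    """Build the extra parameters forwarded to the vLLM backend."""
    params: dict[str, Any] = {"prompt": prompt}

    # Before the fix, a check like `if prompt_logprobs:` would silently drop
    # a requested value of 0. Comparing against None (and rejecting negative
    # values) lets 0 reach the backend as well.
    if prompt_logprobs is not None and prompt_logprobs >= 0:
        params["prompt_logprobs"] = prompt_logprobs

    return params


# The test mentioned in the commit could be parameterized over both cases,
# for example with pytest:
#
#   @pytest.mark.parametrize("prompt_logprobs", [1, 0])
#   def test_openai_completion_prompt_logprobs(prompt_logprobs, ...):
#       ...
```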
Directory contents:

- __init__.py
- dog.png
- test_embedding.py
- test_openai_completion.py
- test_text_inference.py
- test_vision_inference.py
- vision_test_1.jpg
- vision_test_2.jpg
- vision_test_3.jpg