feat: [new open benchmark] DocVQA (#1647)

# What does this PR do?
DocVQA asks the model to look at a picture and then answer a question, given in
text, with a text answer drawn from the textual information in the picture. These
questions often require understanding the relative positions of text
within the picture.

The original dataset is defined as "Task 1" at
https://www.docvqa.org/datasets
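For context, a DocVQA-style example pairs a document image with a text question and one or more accepted reference answers, and the model's text answer is checked against those references. The sketch below is illustrative only: the field names are hypothetical (see the dataset definition linked above for the real schema), and the scoring shown is naive exact match rather than DocVQA's actual ANLS metric.

```python
# Hypothetical shape of one DocVQA-style eval row (field names are
# illustrative, not the dataset's actual schema).
example = {
    "image": "documents/invoice_0001.png",    # document image the model must read
    "question": "What is the invoice date?",  # question posed in plain text
    "answers": ["11/21/2019", "21 Nov 2019"], # any listed answer counts as correct
}

def is_correct(model_answer: str, answers: list[str]) -> bool:
    """Naive exact-match scoring after normalization.

    Real DocVQA scoring uses ANLS (edit-distance based); this is only a sketch.
    """
    def normalize(s: str) -> str:
        return s.strip().lower()
    return normalize(model_answer) in {normalize(a) for a in answers}

print(is_correct("11/21/2019", example["answers"]))  # exact match -> True
```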


## Test Plan
Set up the llama server with

```
llama stack run ./llama_stack/templates/open-benchmark/run.yaml
```


then send traffic:

```
llama-stack-client eval run-benchmark "meta-reference-docvqa" \
  --model-id meta-llama/Llama-3.3-70B-Instruct \
  --output-dir /tmp/gpqa \
  --num-examples 200
```
Commit d117bfe597 (parent 1902e5754c), authored by yyymeta on 2025-03-19 14:56:14 -07:00, committed by GitHub.
6 changed files with 287 additions and 1 deletion.


```diff
@@ -188,7 +188,7 @@ def test_chat_completion_doesnt_block_event_loop(caplog):
     caplog.set_level(logging.WARNING)
     # Log when event loop is blocked for more than 200ms
-    loop.slow_callback_duration = 0.2
+    loop.slow_callback_duration = 0.5
     # Sleep for 500ms in our delayed http response
     sleep_time = 0.5
```