llama-stack-mirror/llama_stack/providers/inline
Commit d117bfe597 by yyymeta: feat: [new open benchmark] DocVQA (#1647)
# What does this PR do?
DocVQA asks the model to look at a picture and then answer a question, given in
text, with a text answer derived from the textual information in the picture. These
questions often require understanding the relative positions of text
within the picture.

The original dataset is defined as "Task 1" of
https://www.docvqa.org/datasets
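
To make the task concrete, here is a minimal sketch of what a DocVQA-style example and a lenient answer check might look like. The field names (`image_path`, `question`, `answers`) and the example record are illustrative assumptions, not the actual dataset schema or scoring used by this benchmark:

```python
# Hypothetical DocVQA-style record and a simple answer check.
# Field names and the example values are illustrative, not the real schema.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so answer comparison is lenient."""
    return " ".join(text.lower().split())

def is_correct(prediction: str, answers: list[str]) -> bool:
    """A prediction counts as correct if it matches any reference answer."""
    pred = normalize(prediction)
    return any(pred == normalize(a) for a in answers)

example = {
    "image_path": "docvqa/images/example.png",  # hypothetical path
    "question": "What is the date at the top of the document?",
    "answers": ["1/8/93", "1/8/1993"],  # multiple references may be accepted
}

print(is_correct("1/8/93", example["answers"]))
```

The real benchmark's scorer lives under the `scoring` provider touched by this PR; this snippet only illustrates the question/answer shape of the task.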


## Test Plan
Set up the llama server with:

```
llama stack run ./llama_stack/templates/open-benchmark/run.yaml
```


Then send traffic:

```
llama-stack-client eval run-benchmark "meta-reference-docvqa" \
  --model-id meta-llama/Llama-3.3-70B-Instruct \
  --output-dir /tmp/gpqa \
  --num-examples 200
```
2025-03-19 14:56:14 -07:00
| Directory | Latest commit | Date |
| --- | --- | --- |
| agents | feat(agent): support multiple tool groups (#1556) | 2025-03-17 22:13:09 -07:00 |
| datasetio | fix: Call pandas.read_* in a seperate thread (#1698) | 2025-03-19 10:46:37 -07:00 |
| eval | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| inference | fix: Updating ToolCall.arguments to allow for json strings that can be decoded on client side (#1685) | 2025-03-19 10:36:19 -07:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | chore: fix mypy violations in post_training modules (#1548) | 2025-03-18 14:58:16 -07:00 |
| safety | feat(agent): support multiple tool groups (#1556) | 2025-03-17 22:13:09 -07:00 |
| scoring | feat: [new open benchmark] DocVQA (#1647) | 2025-03-19 14:56:14 -07:00 |
| telemetry | refactor: move all datetime.now() calls to UTC (#1589) | 2025-03-13 15:34:53 -07:00 |
| tool_runtime | chore: Make code interpreter async (#1654) | 2025-03-18 14:13:46 -07:00 |
| vector_io | feat: Qdrant inline provider (#1273) | 2025-03-18 14:04:21 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |