llama-stack-mirror/llama_stack/providers/inline
Ben Browning 055885bd5a Add PDF support to file_search for Responses API
This adds basic PDF support (using our existing `parse_pdf` function)
to the file_search tool and corresponding Vector Files API.

When a PDF file is uploaded and attached to a vector store, we parse
the PDF and then chunk its content as we would any other document. This
is not the best long-term solution, but it matches what we have been
doing so far for PDF files in the memory tool.
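
A rough sketch of that ingest path is below. The helper names
(chunk_text, ingest_file) are hypothetical, and pypdf stands in for the
existing parse_pdf utility; the actual vector_io implementation may
differ in signatures and chunking parameters.

    # Illustrative sketch only: helper names and signatures here are
    # assumptions, not the actual llama-stack implementation.
    from io import BytesIO

    from pypdf import PdfReader


    def parse_pdf(data: bytes) -> str:
        """Extract plain text from raw PDF bytes."""
        reader = PdfReader(BytesIO(data))
        return "\n".join(page.extract_text() or "" for page in reader.pages)


    def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
        """Split extracted text into overlapping chunks, as for any other document."""
        step = chunk_size - overlap
        return [text[start : start + chunk_size] for start in range(0, len(text), step)]


    def ingest_file(mime_type: str, data: bytes) -> list[str]:
        """Parse PDFs to text first; treat everything else as text already."""
        text = parse_pdf(data) if mime_type == "application/pdf" else data.decode("utf-8")
        return chunk_text(text)

With something along these lines, chunks extracted from a PDF flow
through the same embedding and indexing path as plain-text files.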

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-13 09:36:55 -04:00
agents feat: File search tool for Responses API 2025-06-13 09:36:04 -04:00
datasetio chore(refact): move paginate_records fn outside of datasetio (#2137) 2025-05-12 10:56:14 -07:00
eval feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
files/localfs feat: reference implementation for files API (#2330) 2025-06-02 21:54:24 -07:00
inference feat: New OpenAI compat embeddings API (#2314) 2025-05-31 22:11:47 -07:00
ios/inference chore: removed executorch submodule (#1265) 2025-02-25 21:57:21 -08:00
post_training feat: add huggingface post_training impl (#2132) 2025-05-16 14:41:28 -07:00
safety feat: add cpu/cuda config for prompt guard (#2194) 2025-05-28 12:23:15 -07:00
scoring chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
telemetry revert: "chore: Remove zero-width space characters from OTEL service" (#2331) 2025-06-02 14:21:35 -07:00
tool_runtime feat: File search tool for Responses API 2025-06-13 09:36:04 -04:00
vector_io Add PDF support to file_search for Responses API 2025-06-13 09:36:55 -04:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00