llama-stack-mirror/llama_stack/providers/inline
Commit 788d34d8b4 by Ben Browning: Move vector file attach code to OpenAIVectorStoreMixin
This moves the vector store file attach code into the
OpenAIVectorStoreMixin. It also centralizes the MIME type and PDF
parsing logic into the existing functions in vector_store.py, with a
small refactor there so both call sites share the same code path.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-13 09:36:55 -04:00
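Below is a minimal sketch of the shape of refactor this commit describes: a shared mixin owning the "attach file to vector store" flow and delegating MIME-type and PDF handling to common helpers in vector_store.py. The names here (content_from_data, parse_pdf, _read_file, _insert_chunks) and signatures are illustrative assumptions, not the actual llama-stack API.

```python
import mimetypes


def parse_pdf(data: bytes) -> str:
    """Stand-in for the PDF parsing helper assumed to live in vector_store.py."""
    raise NotImplementedError


def content_from_data(data: bytes, mime_type: str | None) -> str:
    """One shared code path for turning raw file bytes into text (illustrative)."""
    if mime_type == "application/pdf":
        return parse_pdf(data)
    # Fall back to treating the payload as UTF-8 text.
    return data.decode("utf-8", errors="ignore")


class OpenAIVectorStoreMixin:
    """Shared attach-file flow; concrete vector_io providers supply only the
    storage hooks instead of each reimplementing the whole flow."""

    async def openai_attach_file_to_vector_store(
        self, vector_store_id: str, file_id: str, filename: str
    ) -> None:
        data = await self._read_file(file_id)
        mime_type, _ = mimetypes.guess_type(filename)
        # Same parsing path regardless of which provider attached the file.
        text = content_from_data(data, mime_type)
        await self._insert_chunks(vector_store_id, text)

    # Provider-specific hooks (hypothetical names):
    async def _read_file(self, file_id: str) -> bytes:
        raise NotImplementedError

    async def _insert_chunks(self, vector_store_id: str, text: str) -> None:
        raise NotImplementedError
```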
agents feat: File search tool for Responses API 2025-06-13 09:36:04 -04:00
datasetio chore(refact): move paginate_records fn outside of datasetio (#2137) 2025-05-12 10:56:14 -07:00
eval feat: implementation for agent/session list and describe (#1606) 2025-05-07 14:49:23 +02:00
files/localfs feat: reference implementation for files API (#2330) 2025-06-02 21:54:24 -07:00
inference feat: New OpenAI compat embeddings API (#2314) 2025-05-31 22:11:47 -07:00
ios/inference chore: removed executorch submodule (#1265) 2025-02-25 21:57:21 -08:00
post_training feat: add huggingface post_training impl (#2132) 2025-05-16 14:41:28 -07:00
safety feat: add cpu/cuda config for prompt guard (#2194) 2025-05-28 12:23:15 -07:00
scoring chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
telemetry revert: "chore: Remove zero-width space characters from OTEL service" (#2331) 2025-06-02 14:21:35 -07:00
tool_runtime feat: File search tool for Responses API 2025-06-13 09:36:04 -04:00
vector_io Move vector file attach code to OpenAIVectorStoreMixin 2025-06-13 09:36:55 -04:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00