llama-stack-mirror/llama_stack/providers/inline
Ben Browning 8a5ea57253 Responses file_search wire up additional params
This passes max_num_results from the file_search tool call
through to the knowledge_search tool, and logs warnings if the
filters or ranking_options params are used, since those are not
wired up yet. It also adds the API surface for filters and
ranking options so we don't have to regenerate clients as we
wire those up.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-13 09:36:55 -04:00
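The behavior this commit describes can be sketched roughly as follows. This is a minimal illustration, not the actual llama-stack code: the helper name `map_file_search_params` and the plain-dict tool config are assumptions; only the described behavior (pass `max_num_results` through, warn and drop `filters`/`ranking_options`) comes from the commit message.

```python
import logging

logger = logging.getLogger(__name__)

def map_file_search_params(tool_config: dict) -> dict:
    """Hypothetical sketch: map Responses file_search tool params onto
    knowledge_search arguments, per the behavior in the commit message."""
    args = {}
    # max_num_results is passed through to knowledge_search
    max_num_results = tool_config.get("max_num_results")
    if max_num_results is not None:
        args["max_num_results"] = max_num_results
    # filters and ranking_options exist on the API surface but are not
    # wired up yet, so they only produce a warning and are ignored
    for unsupported in ("filters", "ranking_options"):
        if tool_config.get(unsupported) is not None:
            logger.warning(
                "file_search param '%s' is not yet supported and will be ignored",
                unsupported,
            )
    return args
```

Exposing `filters` and `ranking_options` in the API now, even though they are ignored, means client SDKs generated from the spec will not need another regeneration pass when the server-side wiring lands.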
agents         | Responses file_search wire up additional params                          | 2025-06-13 09:36:55 -04:00
datasetio      | chore(refact): move paginate_records fn outside of datasetio (#2137)     | 2025-05-12 10:56:14 -07:00
eval           | feat: implementation for agent/session list and describe (#1606)         | 2025-05-07 14:49:23 +02:00
files/localfs  | feat: reference implementation for files API (#2330)                     | 2025-06-02 21:54:24 -07:00
inference      | feat: New OpenAI compat embeddings API (#2314)                           | 2025-05-31 22:11:47 -07:00
ios/inference  | chore: removed executorch submodule (#1265)                              | 2025-02-25 21:57:21 -08:00
post_training  | feat: add huggingface post_training impl (#2132)                         | 2025-05-16 14:41:28 -07:00
safety         | feat: add cpu/cuda config for prompt guard (#2194)                       | 2025-05-28 12:23:15 -07:00
scoring        | chore: enable pyupgrade fixes (#1806)                                    | 2025-05-01 14:23:50 -07:00
telemetry      | revert: "chore: Remove zero-width space characters from OTEL service" (#2331) | 2025-06-02 14:21:35 -07:00
tool_runtime   | feat: File search tool for Responses API                                 | 2025-06-13 09:36:04 -04:00
vector_io      | Move vector file attach code to OpenAIVectorStoreMixin                   | 2025-06-13 09:36:55 -04:00
__init__.py    | impls -> inline, adapters -> remote (#381)                               | 2024-11-06 14:54:05 -08:00