llama-stack-mirror/llama_stack/providers/registry
Ashwin Bharambe 0e96279bee
chore(cleanup)!: remove tool_runtime.rag_tool (#3871)
Remove the `builtin::rag` tool group completely, since it is no longer
used. Knowledge search is handled by the Responses implementation of
knowledge_search, which uses the `openai_vector_stores` pathway.

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-20 22:26:21 -07:00
File               Last commit                                               Date
__init__.py        API Updates (#73)                                         2024-09-17 19:51:35 -07:00
agents.py          chore!: remove telemetry API usage (#3815)                2025-10-16 10:39:32 -07:00
batches.py         chore: remove openai dependency from providers (#3398)    2025-09-11 10:19:59 +02:00
datasetio.py       feat: combine ProviderSpec datatypes (#3378)              2025-09-18 16:10:00 +02:00
eval.py            feat: combine ProviderSpec datatypes (#3378)              2025-09-18 16:10:00 +02:00
files.py           feat: combine ProviderSpec datatypes (#3378)              2025-09-18 16:10:00 +02:00
inference.py       chore(cleanup)!: remove tool_runtime.rag_tool (#3871)     2025-10-20 22:26:21 -07:00
post_training.py   feat: combine ProviderSpec datatypes (#3378)              2025-09-18 16:10:00 +02:00
safety.py          feat: combine ProviderSpec datatypes (#3378)              2025-09-18 16:10:00 +02:00
scoring.py         chore: remove openai dependency from providers (#3398)    2025-09-11 10:19:59 +02:00
tool_runtime.py    chore(cleanup)!: remove tool_runtime.rag_tool (#3871)     2025-10-20 22:26:21 -07:00
vector_io.py       chore(cleanup)!: remove tool_runtime.rag_tool (#3871)     2025-10-20 22:26:21 -07:00
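Each module in this registry directory describes the providers available for one API surface (inference, safety, vector_io, and so on). As a rough illustration of that pattern, the sketch below is a minimal, self-contained stand-in: the `ProviderSpec` fields and the example provider names used here are assumptions for illustration, not the real llama-stack datatypes.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified stand-in for a provider spec record;
# the real llama-stack ProviderSpec carries more fields.
@dataclass
class ProviderSpec:
    api: str                                        # API this provider serves, e.g. "inference"
    provider_type: str                              # registry key, e.g. "inline::meta-reference"
    pip_packages: list[str] = field(default_factory=list)  # extra deps to install

# Mirrors the shape of a registry module such as inference.py:
# one function enumerating every provider spec for its API.
def available_providers() -> list[ProviderSpec]:
    return [
        ProviderSpec(api="inference", provider_type="inline::meta-reference"),
        ProviderSpec(
            api="inference",
            provider_type="remote::ollama",
            pip_packages=["ollama"],
        ),
    ]

if __name__ == "__main__":
    print([spec.provider_type for spec in available_providers()])
```

A stack build would consult such a registry to resolve a configured `provider_type` string to its spec (and the packages it needs) before wiring the provider in.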