llama-stack-mirror/llama_stack/providers/registry
Ashwin Bharambe c165de409d chore(cleanup)!: remove tool_runtime.rag_tool
Remove the `builtin::rag` tool group entirely, since nothing targets it
anymore. knowledge_search now uses the Responses implementation, which
goes through the `openai_vector_stores` pathway.
2025-10-20 21:46:16 -07:00
File             | Last commit                                                                                                      | Date
__init__.py      | API Updates (#73)                                                                                                | 2024-09-17 19:51:35 -07:00
agents.py        | chore!: remove telemetry API usage (#3815)                                                                       | 2025-10-16 10:39:32 -07:00
batches.py       | chore: remove openai dependency from providers (#3398)                                                           | 2025-09-11 10:19:59 +02:00
datasetio.py     | feat: combine ProviderSpec datatypes (#3378)                                                                     | 2025-09-18 16:10:00 +02:00
eval.py          | feat: combine ProviderSpec datatypes (#3378)                                                                     | 2025-09-18 16:10:00 +02:00
files.py         | feat: combine ProviderSpec datatypes (#3378)                                                                     | 2025-09-18 16:10:00 +02:00
inference.py     | refactor: replace default all-MiniLM-L6-v2 embedding model by nomic-embed-text-v1.5 in Llama Stack (#3183)       | 2025-10-14 10:44:20 -04:00
post_training.py | feat: combine ProviderSpec datatypes (#3378)                                                                     | 2025-09-18 16:10:00 +02:00
safety.py        | feat: combine ProviderSpec datatypes (#3378)                                                                     | 2025-09-18 16:10:00 +02:00
scoring.py       | chore: remove openai dependency from providers (#3398)                                                           | 2025-09-11 10:19:59 +02:00
tool_runtime.py  | chore(cleanup)!: remove tool_runtime.rag_tool                                                                    | 2025-10-20 21:46:16 -07:00
vector_io.py     | chore(cleanup)!: remove tool_runtime.rag_tool                                                                    | 2025-10-20 21:46:16 -07:00
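Each file in this registry directory declares the available providers for one API surface (inference, safety, vector_io, and so on). As a rough illustration of that pattern — with hypothetical names and fields, not the actual llama-stack `ProviderSpec` definition — a registry module of this kind exposes a function returning a list of provider specs that the stack can resolve by key:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the registry pattern; the field names here are
# assumptions for illustration, not the real llama-stack ProviderSpec.
@dataclass(frozen=True)
class ProviderSpec:
    api: str                          # API surface, e.g. "inference"
    provider_type: str                # unique key, e.g. "inline::meta-reference"
    module: str                       # import path of the implementation
    pip_packages: list[str] = field(default_factory=list)

def available_providers() -> list[ProviderSpec]:
    """Each registry module returns the specs for one API surface."""
    return [
        ProviderSpec(
            api="inference",
            provider_type="inline::meta-reference",
            module="example.providers.inline.inference",
        ),
        ProviderSpec(
            api="inference",
            provider_type="remote::openai",
            module="example.providers.remote.openai",
            pip_packages=["openai"],
        ),
    ]

def resolve(provider_type: str) -> ProviderSpec:
    """Look up a provider spec by its provider_type key."""
    by_type = {spec.provider_type: spec for spec in available_providers()}
    return by_type[provider_type]
```

Under this scheme, deleting a tool group (as the `tool_runtime.rag_tool` cleanup did) amounts to dropping its spec from the list returned by the corresponding registry module.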