llama-stack-mirror/docs/source
Daniele Martinoli fb998683e0
fix: Agent uses the first configured vector_db_id when documents are provided (#1276)
# What does this PR do?
The agent API allows querying multiple vector DBs through the `vector_db_ids`
argument of the `rag` tool:
```py
        toolgroups=[
            {
                "name": "builtin::rag",
                "args": {"vector_db_ids": [vector_db_id]},
            }
        ],
```
This means that multiple DBs can be used to compose an aggregated
context, with the query executed against each of them.
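
The scenario this PR addresses is the one where documents are attached to a
turn. A minimal sketch of that call, assuming the `llama_stack_client`
`Agent.create_turn` API with a `documents` argument (the prompt and URL below
are placeholders):
```py
# Sketch only: attaching documents to a turn is what triggers their
# ingestion into a vector DB, which is where the fix below applies.
response = agent.create_turn(
    messages=[{"role": "user", "content": "Summarize the attached notes."}],
    documents=[
        {
            "content": "https://example.com/notes.txt",  # placeholder URL
            "mime_type": "text/plain",
        }
    ],
    session_id=session_id,
)
```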

When documents are passed to the next agent turn, there is no explicit
way to configure the vector DB where the embeddings will be ingested. In
such cases, we can assume that (see the sketch after this list):
- if `vector_db_ids` is given, we use the first one (it probably makes
sense to assume it is the only one in the list; otherwise we would have
to loop over all the given DBs to get consistent ingestion)
- if no `vector_db_ids` is given, we keep the current logic and generate
a default DB using the default provider. If multiple providers are
defined, the API fails as expected: the user has to specify where to
ingest the documents.
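
As a rough illustration of the rule above (not the actual server-side code),
the selection boils down to something like:
```py
from typing import Callable, Optional


def select_vector_db_for_ingestion(
    vector_db_ids: Optional[list[str]],
    create_default_db: Callable[[], str],
) -> str:
    """Sketch of the DB-selection rule described above (not the real implementation)."""
    if vector_db_ids:
        # At least one DB is configured on the rag tool: ingest into the first one.
        return vector_db_ids[0]
    # Nothing configured: fall back to registering a default DB with the
    # default provider (this is where the API fails if multiple providers exist).
    return create_default_db()
```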

(Closes #1270)

## Test Plan
The issue description details how to replicate the problem.

[//]: # (## Documentation)

---------

Signed-off-by: Daniele Martinoli <dmartino@redhat.com>
2025-03-04 21:44:13 -08:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| building_applications | fix: Agent uses the first configured vector_db_id when documents are provided (#1276) | 2025-03-04 21:44:13 -08:00 |
| concepts | docs: Add vLLM to the list of inference providers in concepts and providers pages (#1227) | 2025-02-23 20:16:30 -08:00 |
| contributing | refactor: move tests/client-sdk to tests/api (#1376) | 2025-03-03 17:28:12 -08:00 |
| distributions | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| getting_started | docs: update user prompt example (#1329) | 2025-02-28 16:42:29 -08:00 |
| introduction | Update index.md (#888) | 2025-01-28 04:55:41 -08:00 |
| playground | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| providers | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| references | chore: rename task_config to benchmark_config (#1397) | 2025-03-04 12:44:04 -08:00 |
| conf.py | fix: update version and fix docs release notes link | 2025-03-03 11:48:57 -08:00 |
| index.md | fix: update version and fix docs release notes link | 2025-03-03 11:48:57 -08:00 |