llama-stack-mirror/docs
Daniele Martinoli cca9bd6cc3
feat: Qdrant inline provider (#1273)
# What does this PR do?
Removed the local execution option from the remote Qdrant provider and
introduced an explicit inline provider for embedded execution. Updated the
ollama template to include this option; this part can be reverted if we
don't want two default `vector_io` providers.

(Closes #1082)
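
For context, the inline/remote split mirrors the two ways the underlying
`qdrant-client` library can be used. The sketch below only illustrates that
distinction (the path and URL are placeholders), it is not the provider
implementation itself:
```py
from qdrant_client import QdrantClient

# Inline (embedded) mode: Qdrant runs in-process and persists to a local path.
# The path is illustrative; the ollama template stores its data under
# ~/.llama/distributions/ollama/qdrant.db.
inline_client = QdrantClient(path="/path/to/qdrant.db")

# Remote mode: connect to a separately running Qdrant server.
remote_client = QdrantClient(url="http://localhost:6333")
```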

## Test Plan
Build and run an ollama distro:
```bash
llama stack build --template ollama --image-type conda
llama stack run --image-type conda ollama
```

Run one of the sample ingestion applications, such as
[rag_with_vector_db.py](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/rag_with_vector_db.py),
but replace this line:
```py
    selected_vector_provider = vector_providers[0]
```
with the following, to use the `qdrant` provider:
```py
    selected_vector_provider = vector_providers[1]
```
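
Selecting by position assumes the Qdrant provider is the second entry in the
list. If the ordering differs, a hedged alternative is to pick the provider by
id instead; this assumes each entry exposes a `provider_id` attribute and that
the id contains `qdrant` in this template:
```py
    # Alternative: select by provider id rather than list position.
    # Assumes each entry has a provider_id attribute and that the Qdrant
    # provider's id contains "qdrant" in this distro's template.
    selected_vector_provider = next(
        p for p in vector_providers if "qdrant" in p.provider_id
    )
```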

After running the test code, verify the timestamp of the Qdrant store:
```bash
% ls -ltr ~/.llama/distributions/ollama/qdrant.db/collection/test_vector_db_*
total 784
-rw-r--r--@ 1 dmartino  staff  401408 Feb 26 10:07 storage.sqlite
```
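
Equivalently, a small Python check (assuming the same default store path shown
above) can report how recently the collection files were written:
```py
import glob
import os
import time

# Report how recently the embedded Qdrant collection files were modified,
# using the default ollama distro path from the listing above.
pattern = os.path.expanduser(
    "~/.llama/distributions/ollama/qdrant.db/collection/test_vector_db_*/*"
)
for path in glob.glob(pattern):
    age_seconds = time.time() - os.path.getmtime(path)
    print(f"{path}: modified {age_seconds:.0f}s ago")
```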


---------

Signed-off-by: Daniele Martinoli <dmartino@redhat.com>
Co-authored-by: Francisco Arceo <farceo@redhat.com>
2025-03-18 14:04:21 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| _static | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| notebooks | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| openapi_generator | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| resources | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| source | feat: Qdrant inline provider (#1273) | 2025-03-18 14:04:21 -07:00 |
| zero_to_hero_guide | docs: update ollama doc url (#1508) | 2025-03-10 13:04:59 -07:00 |
| conftest.py | No spaces in ipynb tests | 2025-02-07 11:56:22 -08:00 |
| contbuild.sh | Fix broken links with docs | 2024-11-22 20:42:17 -08:00 |
| dog.jpg | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| getting_started.ipynb | fix: update getting_started structured decoding cell (#1523) | 2025-03-10 13:03:57 -07:00 |
| license_header.txt | Initial commit | 2024-07-23 08:32:33 -07:00 |
| make.bat | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| Makefile | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| readme.md | Fix README.md notebook links (#976) | 2025-02-05 14:33:46 -08:00 |
| requirements.txt | fix: add tomli to requirements.txt for docs; ideally we need to move this to uv | 2025-03-03 11:11:17 -08:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: