llama-stack-mirror/docs/source/distributions
Ashwin Bharambe 11697f85c5
fix: pull ollama embedding model if necessary (#1209)
Embedding models are tiny and can be pulled on demand. Let's do that so
the user doesn't have to do "yet another thing" to get set up.

Thanks @hardikjshah for the suggestion.

Also fixed a build dependency miss (TODO: distro_codegen needs to
actually check that the build template contains all providers mentioned
in the run.yaml file)
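
The TODO above amounts to a subset check: every provider the run config references must also appear in the build template. A minimal sketch of that check, assuming hypothetical dict shapes (the function name and data layout here are illustrative, not the actual distro_codegen structures):

```python
def missing_providers(build_providers, run_providers):
    """Return providers referenced in the run config but absent from the
    build template, grouped by API. Both arguments map an API name to a
    list of provider types, e.g. {"inference": ["remote::ollama"]}."""
    missing = {}
    for api, providers in run_providers.items():
        allowed = set(build_providers.get(api, []))
        absent = [p for p in providers if p not in allowed]
        if absent:
            missing[api] = absent
    return missing

# Example: run.yaml references a provider the build template forgot to
# list -- the kind of miss this PR fixed.
build = {"inference": ["remote::ollama"]}
run = {"inference": ["remote::ollama", "inline::sentence-transformers"]}
print(missing_providers(build, run))
# {'inference': ['inline::sentence-transformers']}
```

distro_codegen could fail the build whenever this function returns a non-empty dict.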

## Test Plan 

First run `ollama rm all-minilm:latest`. 

Run `llama stack build --template ollama && llama stack run ollama --env
INFERENCE_MODEL=llama3.2:3b-instruct-fp16`. Verify that it prints
"Pulling embedding model `all-minilm:latest`" and that the stack
starts up correctly. Then verify that `ollama list` shows the model has
been downloaded.
2025-02-21 10:35:56 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| ondevice_distro | Fixed distro documentation (#852) | 2025-01-23 08:19:51 -08:00 |
| remote_hosted_distro | feat(providers): add NVIDIA Inference embedding provider and tests (#935) | 2025-02-20 16:59:48 -08:00 |
| self_hosted_distro | fix: pull ollama embedding model if necessary (#1209) | 2025-02-21 10:35:56 -08:00 |
| building_distro.md | chore: remove --no-list-templates option (#1121) | 2025-02-18 10:13:46 -08:00 |
| configuration.md | script for running client sdk tests (#895) | 2025-02-19 22:38:06 -08:00 |
| importing_as_library.md | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| index.md | Add Kubernetes deployment guide (#899) | 2025-02-06 10:28:02 -08:00 |
| kubernetes_deployment.md | Add Kubernetes deployment guide (#899) | 2025-02-06 10:28:02 -08:00 |
| selection.md | docs: miscellaneous small fixes (#961) | 2025-02-04 15:31:30 -08:00 |