Ashwin Bharambe 11697f85c5
fix: pull ollama embedding model if necessary (#1209)
Embedding models are tiny and can be pulled on-demand. Let's do that so
the user doesn't have to do "yet another thing" to get themselves set
up.

Thanks @hardikjshah for the suggestion.
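The pull-on-demand check can be sketched roughly as below. This is a hypothetical illustration, not the actual llama-stack implementation: the function name, the `pull` callback, and the tag normalization are all assumptions; in practice the list of available models would come from the ollama client and `pull` would call its pull API.

```python
# Sketch of on-demand pulling: given the locally available model tags,
# pull the embedding model only if it is not already present.
# (Hypothetical helper; not llama-stack's real code.)

def ensure_embedding_model(available: list[str], model: str, pull) -> bool:
    """Pull `model` via the `pull` callback if absent. Returns True if pulled."""

    def norm(name: str) -> str:
        # ollama reports tags like "all-minilm:latest"; treat a bare
        # name as an implicit ":latest" so comparisons line up.
        return name if ":" in name else f"{name}:latest"

    if norm(model) in {norm(m) for m in available}:
        return False  # already downloaded, nothing to do
    print(f"Pulling embedding model `{model}`")
    pull(norm(model))
    return True
```

Because embedding models are small, triggering the pull at stack startup adds little latency while removing a manual setup step.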

Also fixed a missing build dependency (TODO: distro_codegen needs to
actually check that the build template contains all providers mentioned
in the run.yaml file)
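The check the TODO describes could look roughly like this. A minimal sketch under assumptions: the `missing_providers` name and the api-to-provider-list mapping are illustrative, not distro_codegen's actual schema.

```python
# Hypothetical sketch of the TODO: report providers referenced by a
# run config that the build template does not include.

def missing_providers(build_providers: dict, run_providers: dict) -> list[str]:
    """Return run-config providers absent from the build template.

    Both arguments map an API name (e.g. "inference") to a list of
    provider types (e.g. ["remote::ollama"])."""
    missing = []
    for api, provs in run_providers.items():
        allowed = set(build_providers.get(api, []))
        missing.extend(f"{api}/{p}" for p in provs if p not in allowed)
    return missing
```

distro_codegen could fail the build when this list is non-empty, catching mismatches like this one before a template ships.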

## Test Plan 

First run `ollama rm all-minilm:latest`. 

Then run `llama stack build --template ollama && llama stack run ollama --env
INFERENCE_MODEL=llama3.2:3b-instruct-fp16`. Observe that it prints a
"Pulling embedding model `all-minilm:latest`" message and that the stack
starts up correctly. Verify that `ollama list` shows the model has been
downloaded.
2025-02-21 10:35:56 -08:00
| Path | Latest commit | Date |
|------|---------------|------|
| apis | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| cli | fix: convert back to model descriptor for model in list --downloaded (#1201) | 2025-02-21 08:10:34 -08:00 |
| distribution | fix: Updating images so that they are able to run without root access (#1208) | 2025-02-21 11:32:56 -05:00 |
| models/llama | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| providers | fix: pull ollama embedding model if necessary (#1209) | 2025-02-21 10:35:56 -08:00 |
| scripts | precommit again | 2025-02-19 22:40:45 -08:00 |
| strong_typing | Ensure that deprecations for fields follow through to OpenAPI | 2025-02-19 13:54:04 -08:00 |
| templates | fix: pull ollama embedding model if necessary (#1209) | 2025-02-21 10:35:56 -08:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| schema_utils.py | feat: adding endpoints for files and uploads (#1070) | 2025-02-20 13:09:00 -08:00 |