llama-stack/llama_stack/providers
Ashwin Bharambe 11697f85c5
fix: pull ollama embedding model if necessary (#1209)
Embedding models are tiny and can be pulled on demand. Let's do that so
the user doesn't have to do "yet another thing" to get themselves set
up.
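
A minimal sketch of that on-demand pull, assuming the `ollama` Python client's `AsyncClient.list()` and `AsyncClient.pull()` methods; the `ensure_embedding_model` helper and the dict-shaped `list()` response are illustrative assumptions, not the exact code in this commit:

```python
from ollama import AsyncClient


async def ensure_embedding_model(
    client: AsyncClient, model_id: str = "all-minilm:latest"
) -> None:
    # Check which models are already available locally; assumes the
    # dict-shaped response ({"models": [{"name": ...}, ...]}) returned
    # by older versions of the ollama client.
    response = await client.list()
    local = {m.get("name") or m.get("model") for m in response.get("models", [])}
    if model_id not in local:
        print(f"Pulling embedding model `{model_id}`")
        await client.pull(model_id)
```

Called from the provider's model-registration path, something like this keeps first-run setup self-contained: the user never has to run `ollama pull` by hand.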

Thanks @hardikjshah for the suggestion.

Also fixed a missing build dependency (TODO: distro_codegen needs to
actually check that the build template contains all providers mentioned
in the run.yaml file).
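
For reference, that codegen check could look roughly like the following; the file layout, YAML keys, and function name are assumptions, not actual distro_codegen code:

```python
from pathlib import Path

import yaml


def check_build_covers_run(template_dir: Path) -> list[str]:
    # Flag providers that run.yaml configures but the build template never
    # includes. Assumed shapes: build.yaml maps each API to a list of
    # provider types under distribution_spec.providers; run.yaml lists
    # configured providers per API, each with a `provider_type` key.
    build = yaml.safe_load((template_dir / "build.yaml").read_text())
    run = yaml.safe_load((template_dir / "run.yaml").read_text())

    built = build.get("distribution_spec", {}).get("providers", {})
    errors = []
    for api, providers in run.get("providers", {}).items():
        for provider in providers:
            if provider["provider_type"] not in set(built.get(api, [])):
                errors.append(
                    f"{api}: `{provider['provider_type']}` appears in run.yaml "
                    "but is missing from build.yaml"
                )
    return errors
```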

## Test Plan 

First run `ollama rm all-minilm:latest` so the embedding model is not already present locally.

Run `llama stack build --template ollama && llama stack run ollama --env
INFERENCE_MODEL=llama3.2:3b-instruct-fp16`. Confirm that it prints a
"Pulling embedding model `all-minilm:latest`" message and that the stack
starts up correctly. Verify that `ollama list` shows the model was
downloaded correctly.
2025-02-21 10:35:56 -08:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| inline | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| registry | feat: inference passthrough provider (#1166) | 2025-02-19 21:47:00 -08:00 |
| remote | fix: pull ollama embedding model if necessary (#1209) | 2025-02-21 10:35:56 -08:00 |
| tests | fix: pass tool_prompt_format to chat_formatter (#1198) | 2025-02-20 21:38:35 -08:00 |
| utils | feat(api): Add options for supporting various embedding models (#1192) | 2025-02-20 22:27:12 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |