Embedding models are tiny and can be pulled on-demand. Let's do that so the user doesn't have to do "yet another thing" to get themselves set up. Thanks @hardikjshah for the suggestion.

Also fixed a missing build dependency (TODO: distro_codegen needs to actually check that the build template contains all providers mentioned in the run.yaml file).

## Test Plan

First run `ollama rm all-minilm:latest`.

Then run `llama stack build --template ollama && llama stack run ollama --env INFERENCE_MODEL=llama3.2:3b-instruct-fp16`. See that it prints "Pulling embedding model `all-minilm:latest`" and that the stack starts up correctly. Verify that `ollama list` shows the model has been downloaded.
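Going back to the main change, here is a minimal sketch of what the on-demand pull could look like, assuming the provider wraps ollama's Python `AsyncClient`; the `ensure_embedding_model` helper and the default model id are illustrative, not the exact llama-stack code:

```python
import asyncio

from ollama import AsyncClient


async def ensure_embedding_model(
    client: AsyncClient, model_id: str = "all-minilm:latest"
) -> None:
    # Embedding models are small, so pulling at startup is cheap, and
    # ollama skips layers it already has, so re-pulling is close to a no-op.
    print(f"Pulling embedding model `{model_id}`")
    await client.pull(model_id)


if __name__ == "__main__":
    asyncio.run(ensure_embedding_model(AsyncClient()))
```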