# What does this PR do?

Adds the sentence-transformers provider and the `all-MiniLM-L6-v2` embedding model to the default models registered in the run.yaml for all providers.

## Test Plan

```shell
llama stack build --template together --image-type conda
llama stack run ~/.llama/distributions/llamastack-together/together-run.yaml
```
Templates updated:

- bedrock
- cerebras
- dell-tgi
- fireworks
- meta-reference-gpu
- meta-reference-quantized-gpu
- ollama
- remote-vllm
- tgi
- together
- vllm-gpu
- dependencies.json
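For illustration, the per-template run.yaml registration might look roughly like the sketch below. The exact keys, provider IDs, and metadata are assumptions based on the PR description, not copied from the diff (384 is the known embedding dimension of `all-MiniLM-L6-v2`):

```yaml
# Hypothetical excerpt of a template run.yaml — key names and IDs are assumptions.
providers:
  inference:
    - provider_id: sentence-transformers
      provider_type: inline::sentence-transformers
      config: {}
models:
  - model_id: all-MiniLM-L6-v2
    provider_id: sentence-transformers
    model_type: embedding
    metadata:
      embedding_dimension: 384
```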