forked from phoenix-oss/llama-stack-mirror
fix: Adding Embedding model to watsonx inference (#2118)
# What does this PR do?
Adds an embedding model provider to watsonx inference.

Issue link: https://github.com/meta-llama/llama-stack/issues/2117

## Test Plan
Once added, users will be able to use the Sentence Transformers model `all-MiniLM-L6-v2`.
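As a rough illustration of the test plan, the sketch below shows how a user might request embeddings from `all-MiniLM-L6-v2` through the Llama Stack client once the sentence-transformers provider is available. The base URL and the exact call shape are assumptions, not something this PR specifies.

```python
# Sketch only: assumes a running llama-stack distribution with the
# inline::sentence-transformers provider enabled and the
# llama-stack-client package installed.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

# Request embeddings from the Sentence Transformers model enabled by this change.
response = client.inference.embeddings(
    model_id="all-MiniLM-L6-v2",
    contents=["What is the capital of France?"],
)

# response.embeddings is expected to hold one float vector per input string
# (all-MiniLM-L6-v2 produces 384-dimensional vectors).
print(len(response.embeddings), len(response.embeddings[0]))
```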
parent 136e6b3cf7
commit c985ea6326
5 changed files with 36 additions and 6 deletions
@@ -4,6 +4,7 @@ distribution_spec:
   providers:
     inference:
     - remote::watsonx
+    - inline::sentence-transformers
     vector_io:
     - inline::faiss
     safety:
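For context on the hunk above: the build spec only declares the sentence-transformers provider; the embedding model itself still has to be registered against that provider. The snippet below is a speculative sketch of doing that through the client API. The provider id, model type, and metadata keys are assumptions based on how other llama-stack distributions wire in sentence-transformers, not details shown in this diff.

```python
# Speculative sketch: registering the embedding model against the newly
# declared inline::sentence-transformers provider. Provider id, model id,
# and metadata keys are assumptions, not taken from this diff.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

client.models.register(
    model_id="all-MiniLM-L6-v2",
    provider_id="sentence-transformers",
    model_type="embedding",
    metadata={"embedding_dimension": 384},  # all-MiniLM-L6-v2 outputs 384-dim vectors
)
```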