fix: Adding Embedding model to watsonx inference (#2118)
# What does this PR do?

Issue Link: https://github.com/meta-llama/llama-stack/issues/2117

## Test Plan

Once added, users will be able to use the Sentence Transformers model `all-MiniLM-L6-v2` as an embedding model with watsonx inference.
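As a rough illustration of the test plan, the new embedding model could be exercised through the llama-stack client's embeddings endpoint once a distribution is running. This is a hedged sketch, not code from the PR; the base URL and port are assumptions.

```python
# Hypothetical usage sketch: call the embeddings API of a running llama-stack
# distribution. The base_url/port below are assumptions, not taken from the PR.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.embeddings(
    model_id="all-MiniLM-L6-v2",  # the Sentence Transformers model this PR enables
    contents=["watsonx inference now supports embedding models"],
)
print(len(response.embeddings[0]))  # all-MiniLM-L6-v2 produces 384-dimensional vectors
```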
parent 136e6b3cf7
commit c985ea6326

5 changed files with 36 additions and 6 deletions
```diff
@@ -833,6 +833,8 @@
         "tqdm",
         "transformers",
         "tree_sitter",
-        "uvicorn"
+        "uvicorn",
+        "sentence-transformers --no-deps",
+        "torch torchvision --index-url https://download.pytorch.org/whl/cpu"
     ]
 }
```
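For context, the new entries install `sentence-transformers` (without pulling in its full dependency tree) and a CPU-only PyTorch build from the PyTorch wheel index, which is what lets the distribution serve `all-MiniLM-L6-v2` locally. Below is a minimal sketch of how that model is typically loaded with the sentence-transformers library; it is not necessarily the adapter's exact code.

```python
# Minimal sketch (not the adapter's actual code): load the embedding model that
# the new dependencies make available and encode a couple of sentences.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # downloads the model on first use
embeddings = model.encode(
    ["What is watsonx?", "Embedding models map text to vectors."]
)
print(embeddings.shape)  # (2, 384): all-MiniLM-L6-v2 outputs 384-dimensional vectors
```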