# llama-stack-mirror/llama_stack/templates

Latest commit: c985ea6326 by Divya (2025-05-12 10:58:22 -07:00)

fix: Adding Embedding model to watsonx inference (#2118)

# What does this PR do?
Issue link: https://github.com/meta-llama/llama-stack/issues/2117

## Test Plan
Once added, users will be able to use the Sentence Transformers model
`all-MiniLM-L6-v2`.
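For context, llama-stack distributions register models in their generated `run.yaml`. A sketch of what an embedding-model entry typically looks like in these templates (the `sentence-transformers` provider id and the 384-dimension metadata are assumptions drawn from other distributions, not verified against this PR's diff):

```yaml
# Hypothetical run.yaml excerpt: registering all-MiniLM-L6-v2 as an
# embedding model alongside the watsonx inference provider.
models:
- metadata:
    embedding_dimension: 384   # all-MiniLM-L6-v2 produces 384-dim vectors
  model_id: all-MiniLM-L6-v2
  provider_id: sentence-transformers
  model_type: embedding
```

The `model_type: embedding` field is what lets the stack route embedding requests to this model rather than treating it as a chat model.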
| Name | Last commit | Date |
|---|---|---|
| bedrock | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| cerebras | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| ci-tests | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| dell | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| dev | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| groq | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| hf-endpoint | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| hf-serverless | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| llama_api | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| meta-reference-gpu | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| nvidia | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| ollama | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| open-benchmark | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| passthrough | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| remote-vllm | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| sambanova | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| tgi | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| together | fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) | 2025-05-08 15:18:16 -07:00 |
| verification | fix: revert "feat(provider): adding llama4 support in together inference provider (#2123)" (#2124) | 2025-05-08 15:18:16 -07:00 |
| vllm-gpu | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00 |
| watsonx | fix: Adding Embedding model to watsonx inference (#2118) | 2025-05-12 10:58:22 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| dependencies.json | fix: Adding Embedding model to watsonx inference (#2118) | 2025-05-12 10:58:22 -07:00 |
| template.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |