# What does this PR do?

IBM watsonx.ai is added as a remote inference provider ([#1741](https://github.com/meta-llama/llama-stack/issues/1741)).

---------

Co-authored-by: Sajikumar JS <sajikumar.js@ibm.com>
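As a rough sketch of what this enables, a distribution's run config could point the `inference` API at watsonx.ai. The `url`, `api_key`, and `project_id` field names and the `WATSONX_*` environment variables below are assumptions for illustration, not taken from this page:

```yaml
# Hypothetical run.yaml fragment (field names and env vars are assumed)
providers:
  inference:
  - provider_id: watsonx
    provider_type: remote::watsonx   # the provider type added by this PR
    config:
      # watsonx.ai endpoint and credentials, resolved from the environment
      url: ${env.WATSONX_BASE_URL:https://us-south.ml.cloud.ibm.com}
      api_key: ${env.WATSONX_API_KEY:}
      project_id: ${env.WATSONX_PROJECT_ID:}
```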
The new watsonx distribution spec (YAML, 30 lines, 665 B):
```yaml
version: '2'
distribution_spec:
  description: Use watsonx for running LLM inference
  providers:
    inference:
    - remote::watsonx
    vector_io:
    - inline::faiss
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
    eval:
    - inline::meta-reference
    datasetio:
    - remote::huggingface
    - inline::localfs
    scoring:
    - inline::basic
    - inline::llm-as-judge
    - inline::braintrust
    tool_runtime:
    - remote::brave-search
    - remote::tavily-search
    - inline::code-interpreter
    - inline::rag-runtime
    - remote::model-context-protocol
image_type: conda
```
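Once a stack is built from this spec (`image_type: conda` indicates the build targets a conda environment), models served by watsonx.ai would be registered against the new provider. The entry below is a hypothetical sketch; the model ID shown is an assumption, not taken from this page:

```yaml
# Hypothetical run.yaml fragment: registering a watsonx-served model
# (the model_id value is assumed for illustration)
models:
- metadata: {}
  model_id: meta-llama/llama-3-3-70b-instruct
  provider_id: watsonx   # must match the inference provider's id
  model_type: llm
```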