Mirror of https://github.com/meta-llama/llama-stack.git (synced 2026-01-04 16:22:16 +00:00)
add mcp runtime as default to all providers
commit acaa92fa24 (parent 9d005154d7)
49 changed files with 264 additions and 177 deletions
@@ -21,7 +21,7 @@ The `llamastack/distribution-remote-vllm` distribution consists of the following
 | safety | `inline::llama-guard` |
 | scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
 | telemetry | `inline::meta-reference` |
-| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::memory-runtime` |
+| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::code-interpreter`, `inline::memory-runtime`, `remote::model-context-protocol` |

 You can use this distribution if you have GPUs and want to run an independent vLLM server container for running inference.
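For context, a minimal sketch of how the newly added tool runtime provider might appear in a distribution's `run.yaml`. The surrounding layout (the `providers` / `tool_runtime` keys and the `provider_id` values) is assumed from common llama-stack distribution templates, not taken from this diff:

```yaml
# Hypothetical excerpt from a distribution's run.yaml; key names follow
# common llama-stack templates and may differ between versions.
providers:
  tool_runtime:
    - provider_id: brave-search
      provider_type: remote::brave-search
      config: {}
    - provider_id: model-context-protocol  # the provider this commit adds as a default
      provider_type: remote::model-context-protocol
      config: {}
```

With the provider registered, MCP-backed tool groups can be resolved through the `remote::model-context-protocol` runtime alongside the existing search and code-interpreter runtimes listed in the table above.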