# What does this PR do?

- Fix issue with the passthrough provider

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan

`llama stack run`

[//]: # (## Documentation)
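A minimal sketch of how the stack might be launched for this test; the `passthrough` template name, port, and placeholder values below are assumptions, not taken from the PR:

```bash
# Hypothetical smoke test: start the passthrough distribution locally.
# Template name ("passthrough") and placeholder values are assumptions.
export PASSTHROUGH_URL=<your-passthrough-endpoint>
export PASSTHROUGH_API_KEY=<your-api-key>
export LLAMA_STACK_PORT=5001

llama stack run passthrough --port $LLAMA_STACK_PORT
```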
---
orphan: true
---
# Passthrough Distribution
The `llamastack/distribution-passthrough` distribution consists of the following provider configurations.
| API | Provider(s) |
|---|---|
| agents | inline::meta-reference |
| datasetio | remote::huggingface, inline::localfs |
| eval | inline::meta-reference |
| inference | remote::passthrough, inline::sentence-transformers |
| safety | inline::llama-guard |
| scoring | inline::basic, inline::llm-as-judge, inline::braintrust |
| telemetry | inline::meta-reference |
| tool_runtime | remote::brave-search, remote::tavily-search, remote::wolfram-alpha, inline::code-interpreter, inline::rag-runtime, remote::model-context-protocol |
| vector_io | inline::faiss, remote::chromadb, remote::pgvector |
## Environment Variables
The following environment variables can be configured:
- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)
- `PASSTHROUGH_API_KEY`: Passthrough API Key (default: ``)
- `PASSTHROUGH_URL`: Passthrough URL (default: ``)
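A sketch of how these variables can be supplied when running the published image; the `--port`/`--env` entrypoint flags are assumptions about the container entrypoint, not taken from this page:

```bash
# Sketch: run the published image and pass the variables through.
# --port and --env are assumed to be accepted by the image entrypoint.
export LLAMA_STACK_PORT=5001

docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  llamastack/distribution-passthrough \
  --port $LLAMA_STACK_PORT \
  --env PASSTHROUGH_URL=$PASSTHROUGH_URL \
  --env PASSTHROUGH_API_KEY=$PASSTHROUGH_API_KEY
```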
## Models
The following models are available by default:
- `llama3.1-8b-instruct`
- `llama3.2-11b-vision-instruct`
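Once the server is up, a quick way to confirm these models are registered is via the client CLI; the commands below are a sketch and assume the `llama-stack-client` CLI is installed and the server is listening on the default port:

```bash
# Sketch: point the client CLI at the running server and list the
# registered models. Endpoint and port are assumptions.
llama-stack-client configure --endpoint http://localhost:5001
llama-stack-client models list
```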