# Self-Hosted Distributions
```{toctree}
:maxdepth: 1
:hidden:

ollama
tgi
remote-vllm
meta-reference-gpu
meta-reference-quantized-gpu
together
fireworks
bedrock
```

We offer deployable distributions that let you host your own Llama Stack server using local inference. Each distribution ships as a Docker image; a sketch of a typical launch command follows the table.

| **Distribution** | **Llama Stack Docker** | **Start This Distribution** |
|:----------------:|:----------------------:|:---------------------------:|
| Ollama | {dockerhub}`distribution-ollama` | [Guide](ollama) |
| TGI | {dockerhub}`distribution-tgi` | [Guide](tgi) |
| vLLM | {dockerhub}`distribution-remote-vllm` | [Guide](remote-vllm) |
| Meta Reference | {dockerhub}`distribution-meta-reference-gpu` | [Guide](meta-reference-gpu) |
| Meta Reference Quantized | {dockerhub}`distribution-meta-reference-quantized-gpu` | [Guide](meta-reference-quantized-gpu) |
| Together | {dockerhub}`distribution-together` | [Guide](together) |
| Fireworks | {dockerhub}`distribution-fireworks` | [Guide](fireworks) |
| Bedrock | {dockerhub}`distribution-bedrock` | [Guide](bedrock) |
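As a quick orientation, here is a minimal sketch of how one of these images might be launched. The `llamastack/` Docker Hub namespace, the port number, the mounted `~/.llama` directory, and the `--port` flag are illustrative assumptions, not confirmed by this page; follow the per-distribution guide linked above for the exact, supported invocation.

```bash
# Hedged sketch: assumes (1) the image is published as
# llamastack/distribution-ollama on Docker Hub, (2) the server listens on
# port 5000, and (3) ~/.llama holds local models and configuration.
# Check the distribution's guide for the real image name and flags.
docker run -it \
  -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port 5000
```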