# Self-Hosted Distributions
```{toctree}
:maxdepth: 1
:hidden:
ollama
tgi
remote-vllm
meta-reference-gpu
meta-reference-quantized-gpu
together
fireworks
bedrock
```
We offer deployable distributions that let you host your own Llama Stack server using local inference. Pick a distribution below and follow its guide; a minimal Docker launch sketch follows the table.
| **Distribution** | **Llama Stack Docker** | **Start This Distribution** |
|:----------------:|:--------------------------------------------:|:---------------------------:|
| Ollama | {dockerhub}`distribution-ollama` | [Guide](ollama) |
| TGI | {dockerhub}`distribution-tgi` | [Guide](tgi) |
| vLLM | {dockerhub}`distribution-remote-vllm` | [Guide](remote-vllm) |
| Meta Reference | {dockerhub}`distribution-meta-reference-gpu` | [Guide](meta-reference-gpu) |
| Meta Reference Quantized | {dockerhub}`distribution-meta-reference-quantized-gpu` | [Guide](meta-reference-quantized-gpu) |
| Together | {dockerhub}`distribution-together` | [Guide](together) |
| Fireworks | {dockerhub}`distribution-fireworks` | [Guide](fireworks) |
| Bedrock | {dockerhub}`distribution-bedrock` | [Guide](bedrock) |
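
As a quick orientation, here is a minimal sketch of pulling and running one of these images with Docker. The `llamastack/distribution-ollama` image path, the port `5000`, and the `~/.llama` host mount are assumptions inferred from the table above, not a verified invocation; see the linked guide for each distribution's exact command, configuration file, and required environment variables.

```bash
# Minimal sketch. Assumptions: the image is published as
# llamastack/distribution-ollama on Docker Hub, the server listens on
# port 5000, and configuration lives in ~/.llama on the host.
# Check the Ollama guide for the real invocation.
docker pull llamastack/distribution-ollama
docker run -it \
  -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama
```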