mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-07-13 00:26:10 +00:00
More simplification of the "Starting a Llama Stack" doc
This commit is contained in:
parent
76fc5d9f31
commit
1e6006c599
4 changed files with 17 additions and 54 deletions
@@ -1,27 +0,0 @@
# Self-Hosted Distributions

```{toctree}
:maxdepth: 1
:hidden:

ollama
tgi
remote-vllm
meta-reference-gpu
meta-reference-quantized-gpu
together
fireworks
bedrock
```

We offer deployable distributions where you can host your own Llama Stack server using local inference.

| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------:|:--------------------------------------------:|:-----------------------:|
| Ollama | {dockerhub}`distribution-ollama` | [Guide](ollama) |
| TGI | {dockerhub}`distribution-tgi` | [Guide](tgi) |
| vLLM | {dockerhub}`distribution-remote-vllm` | [Guide](remote-vllm) |
| Meta Reference | {dockerhub}`distribution-meta-reference-gpu` | [Guide](meta-reference-gpu) |
| Meta Reference Quantized | {dockerhub}`distribution-meta-reference-quantized-gpu` | [Guide](meta-reference-quantized-gpu) |
| Together | {dockerhub}`distribution-together` | [Guide](together) |
| Fireworks | {dockerhub}`distribution-fireworks` | [Guide](fireworks) |
| Bedrock | {dockerhub}`distribution-bedrock` | [Guide](bedrock) |
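As a rough sketch, launching one of the tabled distributions might look like the following. This assumes the images are published under the `llamastack` Docker Hub namespace and that the server exposes its API on port 5000 — both are assumptions here, so check the linked per-distribution guide for the exact image name, ports, and required environment variables:

```shell
# Pull the Ollama distribution image
# (llamastack/ namespace and tag are assumptions; see the Ollama guide)
docker pull llamastack/distribution-ollama

# Run the Llama Stack server, publishing the assumed API port
docker run -it \
  -p 5000:5000 \
  llamastack/distribution-ollama
```

Each guide in the table documents the distribution-specific setup (e.g. a locally running Ollama daemon, or API keys for hosted providers such as Together, Fireworks, and Bedrock).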