# Starting a Llama Stack

```{toctree}
:maxdepth: 3
:hidden:

importing_as_library
building_distro
configuration
```

You can instantiate a Llama Stack in one of the following ways:

- **As a Library**: this is the simplest, especially if you are using an external inference service. See [Using Llama Stack as a Library](importing_as_library); a minimal sketch follows this list.
- **Docker**: we provide a number of pre-built Docker containers so you can start a Llama Stack server instantly. You can also build your own custom Docker container.
- **Conda**: finally, you can build a custom Llama Stack server using `llama stack build` containing the exact combination of providers you wish. We have provided various templates to make getting started easier.
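
For the library route, here is a minimal sketch, assuming the `LlamaStackAsLibraryClient` helper from `llama_stack.distribution.library_client`, the `together` template, and a `TOGETHER_API_KEY` in your environment; see [Using Llama Stack as a Library](importing_as_library) for the authoritative walkthrough.

```python
# Minimal sketch: running Llama Stack in-process as a library.
# Assumes the `together` template and TOGETHER_API_KEY set in the environment;
# exact class and method names may differ between releases.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("together")
client.initialize()  # loads the template's providers in-process

# Sanity check: list the models this distribution exposes.
for model in client.models.list():
    print(model.identifier)
```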

Which template / distribution to choose depends on the hardware you have for running LLM inference.

- Do you have access to a machine with powerful GPUs? If so, we suggest:
  - {dockerhub}`distribution-remote-vllm` (Guide)
  - {dockerhub}`distribution-meta-reference-gpu` (Guide)
  - {dockerhub}`distribution-tgi` (Guide)
- Are you running on a "regular" desktop machine? If so, we suggest:
  - {dockerhub}`distribution-ollama` (Guide)
- Do you have an API key for a remote inference provider like Fireworks, Together, etc.? If so, we suggest:
  - {dockerhub}`distribution-together` (Guide)
  - {dockerhub}`distribution-fireworks` (Guide)
- Do you want to run Llama Stack inference on your iOS / Android device? If so, we suggest checking out our on-device distributions.

You can also build your own custom distribution.
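
However you start the server (Docker, Conda, or a custom build), you talk to it the same way. Below is a minimal sketch using the `llama-stack-client` Python SDK; the port (5000 here) and the available models depend on how you configured and launched your distribution.

```python
# Minimal sketch: connecting to a running Llama Stack server.
# Assumes `pip install llama-stack-client` and a server listening on port 5000;
# adjust base_url to match the port you passed when starting the distribution.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Sanity check: list the models served by this distribution.
for model in client.models.list():
    print(model.identifier)
```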