# Starting a Llama Stack

```{toctree}
:maxdepth: 3
:hidden:

importing_as_library
building_distro
configuration
```
You can instantiate a Llama Stack in one of the following ways:

- **As a Library**: this is the simplest, especially if you are using an external inference service. See [Using Llama Stack as a Library](importing_as_library).
- **Docker**: we provide a number of pre-built Docker containers so you can start a Llama Stack server instantly. You can also build your own custom Docker container. A sketch of running one of these images appears after the hardware checklist below.
- **Conda**: finally, you can build a custom Llama Stack server using `llama stack build`, containing the exact combination of providers you wish. We have provided various templates to make getting started easier; see the sketch after this list.
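
For the Conda route, the flow is roughly as follows. This is a minimal sketch: the `ollama` template name and the `--image-type conda` flag mirror examples elsewhere in these docs, and the port value is illustrative; check `llama stack build --help` for the options available in your installed version.

```bash
# Build a server from a shipped template into a conda environment.
# "ollama" is one example template; pick whichever matches your
# inference setup.
llama stack build --template ollama --image-type conda

# Start the server built from that template. The port is an
# illustrative assumption; use whatever your configuration specifies.
llama stack run ollama --port 8321
```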
Which templates / distributions to choose depends on the hardware you have for running LLM inference.
- **Do you have access to a machine with powerful GPUs?** If so, we suggest one of the self-hosted GPU distributions.

- **Are you running on a "regular" desktop machine?** If so, we suggest:
  - {dockerhub}`distribution-ollama` (Guide)

- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest one of the remote-hosted distributions.

- **Do you want to run Llama Stack inference on your iOS / Android device?** If so, we suggest:
  - iOS SDK
  - Android (coming soon)
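
For the desktop / Ollama case above, launching the pre-built container looks roughly like this. A minimal sketch, assuming the `llamastack/distribution-ollama` image referenced above, a local Ollama server on its default port 11434, and illustrative values for the server port and `INFERENCE_MODEL`; consult the distribution's guide for the exact flags your version expects.

```bash
# Pull and run the pre-built Ollama distribution image.
# The port (8321) and the INFERENCE_MODEL / OLLAMA_URL values are
# illustrative assumptions; ~/.llama is mounted so local state and
# configuration persist across container restarts.
docker run -it \
  -p 8321:8321 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port 8321 \
  --env INFERENCE_MODEL=llama3.2:3b \
  --env OLLAMA_URL=http://host.docker.internal:11434
```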
You can also build your own [custom distribution](building_distro).