Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-29 15:23:51 +00:00

commit 9ddc28eca7 (parent 557e1f9fe7): updates

1 changed file with 11 additions and 0 deletions
@@ -41,6 +41,17 @@ Both of these provide options to run model inference using our reference implementations

### Docker
Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have several options:
**Do you have access to a machine with powerful GPUs?**
If so, we suggest...
**Are you running on a "regular" desktop machine?**
In that case, we suggest Ollama.
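
As a quick illustration of the Ollama route, here is a minimal sketch that chats with a locally running Ollama server over its REST API (the model name is an assumption; substitute whichever Llama model you have pulled):

```python
# Minimal sketch: chat with a locally running Ollama server.
# Assumes Ollama is installed and a Llama model has been pulled,
# e.g. `ollama pull llama3`. Port 11434 is Ollama's default.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # assumption: use whatever model you pulled
        "messages": [{"role": "user", "content": "Hello, Llama!"}],
        "stream": False,  # ask for a single JSON response, not a stream
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```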
**Do you have access to a remote inference provider like Fireworks, Together, etc.?**
...
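
Whichever provider you pick, most of them expose an OpenAI-compatible chat completions endpoint. Below is a hedged sketch against Fireworks; the endpoint path, model identifier, and the FIREWORKS_API_KEY environment variable are assumptions, so check your provider's documentation and adapt for Together or others:

```python
# Sketch: call a hosted Llama model on a remote inference provider.
# The endpoint and model id below follow Fireworks' OpenAI-compatible
# API; both are assumptions, so verify against the provider's docs.
import os
import requests

resp = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
    json={
        "model": "accounts/fireworks/models/llama-v3p1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello, Llama!"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```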
We provide pre-built Docker images of the Llama Stack distributions, which can be found in the [distributions](../distributions/) folder.
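
Once a distribution's image is up and serving, you can talk to it from Python. A rough sketch using the llama-stack-client package follows; the port, model identifier, and exact method signature are assumptions and may differ across client versions:

```python
# Sketch: query a Llama Stack server started from a pre-built
# Docker image. Assumes `pip install llama-stack-client` and a
# server listening on localhost:5000; adjust both to your setup.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",  # assumption: a model your distribution serves
    messages=[{"role": "user", "content": "Hello, Llama!"}],
)
print(response.completion_message.content)
```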
> [!NOTE]