mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-31 05:13:53 +00:00)
docs: Updated documentation and configuration to make things easier for the unfamiliar
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
This commit is contained in:
parent
9b478f3756
commit
2847216efb
10 changed files with 69 additions and 32 deletions
# Using Llama Stack as a Library

## Setup Llama Stack without a Server

If you are planning to use an external service for Inference (even Ollama or TGI counts as external), it is often easier to use Llama Stack as a library. This avoids the overhead of setting up a server.

```bash
# setup
uv pip install llama-stack

# build a distribution for the provider you plan to use, e.g.:
llama stack build --template together --image-type venv
# or:
llama stack build --template ollama --image-type venv
```
The stack can then be used directly from Python instead of going through an HTTP server. A minimal sketch, assuming the `LlamaStackAsLibraryClient` entry point and the `ollama` template built above (the import path reflects the library at the time of this commit and may differ in later releases):

```python
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

# construct an in-process client from a previously built template
client = LlamaStackAsLibraryClient("ollama")
client.initialize()
```