llama-stack/docs/source/distributions/index.md
add nvidia distribution (#565)
# What does this PR do?

Adds an `nvidia` template for creating a distribution that uses the inference adapter
for NVIDIA NIMs.

## Test Plan

Built the Llama Stack distribution for NVIDIA from the template, with both Docker and
Conda, and verified it with the llama-stack-client CLI.
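
The build step itself isn't captured in the transcript below; a rough sketch of the commands used (the `nvidia` template name comes from this PR, while the exact flags, port, and environment setup are assumptions) would look like:

```bash
# Build the distribution from the nvidia template, as a conda env or a docker image
llama stack build --template nvidia --image-type conda
# llama stack build --template nvidia --image-type docker

# Start the server on port 5000 so the client commands below can reach it
# (NIM endpoint / NVIDIA_API_KEY configuration assumed to be handled via run.yaml or env vars)
llama stack run nvidia --port 5000
```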
```bash
(.venv) local-cdgamarose@a4u8g-0006:~/llama-stack$ llama-stack-client configure --endpoint http://localhost:5000
Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000
(.venv) local-cdgamarose@a4u8g-0006:~/llama-stack$ llama-stack-client models list
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ identifier                       ┃ provider_id ┃ provider_resource_id       ┃ metadata ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
│ Llama3.1-8B-Instruct             │ nvidia      │ meta/llama-3.1-8b-instruct │ {}       │
│ meta-llama/Llama-3.2-3B-Instruct │ nvidia      │ meta/llama-3.2-3b-instruct │ {}       │
└──────────────────────────────────┴─────────────┴────────────────────────────┴──────────┘
(.venv) local-cdgamarose@a4u8g-0006:~/llama-stack$ llama-stack-client inference chat-completion --message "hello, write me a 2 sentence poem"
ChatCompletionResponse(
    completion_message=CompletionMessage(
        content='Here is a 2 sentence poem:\n\nThe sun sets slow and paints the sky, \nA gentle hue of pink that makes me sigh.',
        role='assistant',
        stop_reason='end_of_turn',
        tool_calls=[]
    ),
    logprobs=None
)
```

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [x] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

---------

Co-authored-by: Matthew Farrellee <matt@cs.wisc.edu>

# Starting a Llama Stack

```{toctree}
:maxdepth: 3
:hidden:

importing_as_library
building_distro
configuration
```

You can instantiate a Llama Stack in one of the following ways:

- **As a Library**: this is the simplest, especially if you are using an external inference service. See [Using Llama Stack as a Library](importing_as_library).
- **Docker**: we provide a number of pre-built Docker containers so you can start a Llama Stack server instantly (a rough launch sketch follows this list). You can also build your own custom Docker container.
- **Conda**: finally, you can build a custom Llama Stack server using `llama stack build` containing the exact combination of providers you wish. We have provided various templates to make getting started easier.
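
For example, a minimal sketch of the Docker route mentioned above (the image name, port, and mounted config directory are illustrative; each distribution's guide has the exact invocation):

```bash
# Pull and start a pre-built Llama Stack distribution image
# (llamastack/distribution-ollama and port 5000 are placeholders here)
docker run -it \
  -p 5000:5000 \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port 5000
```

Once a server is up, the `llama-stack-client configure --endpoint http://localhost:5000` flow shown in the test plan above works the same way regardless of how the stack was started.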

Which templates / distributions to choose depends on the hardware you have for running LLM inference.

- **Do you have access to a machine with powerful GPUs?** If so, we suggest:
  - {dockerhub}`distribution-remote-vllm` (Guide)
  - {dockerhub}`distribution-meta-reference-gpu` (Guide)
  - {dockerhub}`distribution-tgi` (Guide)
  - {dockerhub}`distribution-nvidia` (Guide)

- **Are you running on a "regular" desktop machine?** If so, we suggest:
  - {dockerhub}`distribution-ollama` (Guide)

- **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest the following (see the key-forwarding sketch after this list):
  - {dockerhub}`distribution-together` (Guide)
  - {dockerhub}`distribution-fireworks` (Guide)

- **Do you want to run Llama Stack inference on your iOS / Android device?** If so, we suggest:

- **Do you want a hosted Llama Stack endpoint?** If so, we suggest:
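
For the remote-provider case, a sketch of how an API key is typically forwarded into the server (the environment variable name, image, and port are illustrative; the provider guides have the exact form):

```bash
# Run the Together-backed distribution, passing the provider API key through to the server
docker run -it \
  -p 5000:5000 \
  llamastack/distribution-together \
  --port 5000 \
  --env TOGETHER_API_KEY=$TOGETHER_API_KEY
```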

You can also build your own custom distribution.
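
As a rough sketch of that path (the template name here is a placeholder, and the provider choices are up to you):

```bash
# See which templates ship with the CLI, then build from one of them
llama stack build --list-templates
llama stack build --template ollama --image-type conda

# Or run the builder interactively and pick providers one by one
llama stack build
```

The interactive path is the one to reach for when none of the shipped templates matches the exact combination of providers you want.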