Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-27 18:50:41 +00:00)
add nvidia nim inference provider to docs (#534)
# What does this PR do?

Add a [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) reference to the docs.

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [x] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
Parent: e2054d53e4
Commit: e0d5be41fe

3 changed files with 3 additions and 1 deletion
```diff
@@ -86,6 +86,7 @@ Additionally, we have designed every element of the Stack such that APIs as well
 | Together | Hosted | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | |
 | Ollama | Single Node | | :heavy_check_mark: | | |
 | TGI | Hosted and Single Node | | :heavy_check_mark: | | |
+| [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) | Hosted and Single Node | | :heavy_check_mark: | | |
 | Chroma | Single Node | | | :heavy_check_mark: | | |
 | PG Vector | Single Node | | | :heavy_check_mark: | | |
 | PyTorch ExecuTorch | On-device iOS | :heavy_check_mark: | :heavy_check_mark: | | |
```
```diff
@@ -58,7 +58,7 @@ While there is a lot of flexibility to mix-and-match providers, often users will
 
 **Remotely Hosted Distro**: These are the simplest to consume from a user perspective. You can simply obtain the API key for these providers, point to a URL and have _all_ Llama Stack APIs working out of the box. Currently, [Fireworks](https://fireworks.ai/) and [Together](https://together.xyz/) provide such easy-to-consume Llama Stack distributions.
 
-**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Cerebras, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.
+**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Cerebras, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.
 
 
 **On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or a tablet.) We provide Distros for iOS and Android (coming soon.)
```
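The consumption model described in the paragraph edited above (obtain an endpoint, hosted or locally hosted, point the client at its URL, and all Llama Stack APIs are available) can be sketched with the Python client. This is a minimal illustration and not part of the diff; the base URL, port, and model identifier are placeholder assumptions, and exact method names may vary across `llama-stack-client` versions.

```python
# Minimal sketch (not part of this commit): talking to a running Llama Stack
# distribution through the Python client. The URL and model_id are placeholders.
from llama_stack_client import LlamaStackClient

# A hosted distro URL would be used the same way as a local one.
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```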
```diff
@@ -44,6 +44,7 @@ A number of "adapters" are available for some popular Inference and Memory (Vect
 | Together | Hosted | Y | Y | | Y | |
 | Ollama | Single Node | | Y | | |
 | TGI | Hosted and Single Node | | Y | | |
+| [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) | Hosted and Single Node | | Y | | |
 | Chroma | Single Node | | | Y | | |
 | Postgres | Single Node | | | Y | | |
 | PyTorch ExecuTorch | On-device iOS | Y | Y | | |
```
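As background on the new table row: a self-hosted NIM serves an OpenAI-compatible HTTP API, which is what an inference adapter ultimately talks to. Below is a minimal sketch of calling such an endpoint directly; the port (8000), path, and model name are assumptions about a typical local Llama NIM deployment, not something specified in this commit.

```python
# Minimal sketch (not part of this commit): calling a locally running NIM over
# its OpenAI-compatible endpoint. Port and model name are assumed defaults.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```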