# Nutanix Distribution

The `llamastack/distribution-nutanix` distribution consists of the following provider configurations.
| API | Inference | Agents | Memory | Safety | Telemetry |
|---|---|---|---|---|---|
| Provider(s) | remote::nutanix | meta-reference | meta-reference | meta-reference | meta-reference |
## Start the Distribution (Hosted remote)

> **Note:** This assumes you have a hosted Nutanix AI endpoint and an API key.
- Clone the repo

  ```bash
  git clone git@github.com:meta-llama/llama-stack.git
  cd llama-stack
  ```
- Configure the model name

  Adjust the `NUTANIX_SUPPORTED_MODELS` variable at line 29 in `llama_stack/providers/adapters/inference/nutanix/nutanix.py` to match the models served by your deployment.
- Build the distribution

  ```bash
  pip install -e .
  llama stack build --template nutanix --name ntnx --image-type conda
  ```
- Set the endpoint URL and API key

  ```bash
  llama stack configure ntnx
  ```
- Serve and enjoy!

  ```bash
  llama stack run ntnx --port 174
  ```
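For the model-name step above, the variable is a plain mapping from llama-stack model identifiers to the model names your Nutanix AI endpoint serves. The sketch below is hypothetical: the actual structure in the adapter may differ, and every key and value shown is a placeholder for your deployment.

```python
# Hypothetical sketch of NUTANIX_SUPPORTED_MODELS in
# llama_stack/providers/adapters/inference/nutanix/nutanix.py.
# Keys: llama-stack model identifiers; values: model names your
# Nutanix AI endpoint serves. All entries are placeholders --
# replace them with the models of your deployment.
NUTANIX_SUPPORTED_MODELS = {
    "Llama3.1-8B-Instruct": "vllm-llama-3-1-8b",
    "Llama3.1-70B-Instruct": "vllm-llama-3-1-70b",
}
```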
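Once the server from the last step is running, you can smoke-test it over plain HTTP. This is a minimal sketch, not the official client: the `/inference/chat_completion` route, the payload shape, and the model name are all assumptions about the llama-stack REST API and your deployment, so adjust them as needed.

```python
import json
import urllib.request

BASE_URL = "http://localhost:174"  # port chosen in `llama stack run` above

# Assumed request shape for a chat completion; adjust the route, the
# payload fields, and the model name to match your llama-stack version.
payload = {
    "model": "Llama3.1-8B-Instruct",  # placeholder: a model your endpoint serves
    "messages": [{"role": "user", "content": "Hello from the Nutanix distribution!"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/inference/chat_completion",  # assumed route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```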