Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-20 23:12:25 +00:00
Commit 64c5d38ae9 (parent 1e2faa461f): "Nutanix AI on!"
10 changed files with 234 additions and 2 deletions
distributions/nutanix/README.md (new file, 40 lines)
@@ -0,0 +1,40 @@
# Nutanix Distribution

The `llamastack/distribution-nutanix` distribution consists of the following provider configurations.

| **API**         | **Inference**   | **Agents**     | **Memory**     | **Safety**     | **Telemetry**  |
|-----------------|-----------------|----------------|----------------|----------------|----------------|
| **Provider(s)** | remote::nutanix | meta-reference | meta-reference | meta-reference | meta-reference |
### Start the Distribution (Hosted remote)

> [!NOTE]
> This assumes you have a hosted Nutanix AI endpoint and an API key.
1. Clone the repo

```
git clone git@github.com:meta-llama/llama-stack.git
cd llama-stack
```
2. Configure the model name

Please adjust the `NUTANIX_SUPPORTED_MODELS` variable at line 29 in `llama_stack/providers/adapters/inference/nutanix/nutanix.py` according to your deployment.
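The exact contents of `NUTANIX_SUPPORTED_MODELS` depend on the adapter version, but as a rough sketch it is a mapping from Llama Stack model identifiers to the model names served by your Nutanix AI endpoint. The entry below is a made-up placeholder, not the shipped default; substitute the names your deployment actually exposes:

```python
# Hypothetical sketch of the NUTANIX_SUPPORTED_MODELS mapping found in
# llama_stack/providers/adapters/inference/nutanix/nutanix.py.
# Keys are Llama Stack model descriptors; values are the model names
# exposed by your Nutanix AI endpoint. Adjust both sides to your deployment.
NUTANIX_SUPPORTED_MODELS = {
    "Llama3.1-8B-Instruct": "vllm-llama-3-1",  # placeholder endpoint model name
}

def map_to_provider_model(identifier: str) -> str:
    """Resolve a Llama Stack model identifier to the endpoint's model name."""
    if identifier not in NUTANIX_SUPPORTED_MODELS:
        raise ValueError(f"Model {identifier} is not supported by this deployment")
    return NUTANIX_SUPPORTED_MODELS[identifier]
```

Requests for a model missing from the mapping fail fast with a clear error instead of being forwarded to the endpoint.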
3. Build the distribution

```
pip install -e .
llama stack build --template nutanix --name ntnx --image-type conda
```
4. Set the endpoint URL and API Key

```
llama stack configure ntnx
```
5. Serve and enjoy!

```
llama stack run ntnx --port 174
```
distributions/nutanix/build.yaml (new file, 1 line)
@@ -0,0 +1 @@
../../llama_stack/templates/nutanix/build.yaml