
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Triton Inference Server

LiteLLM supports Embedding Models on Triton Inference Server.

## Usage

### Example Call

Use the `triton/` prefix to route to your Triton server.

```python
import litellm

response = await litellm.aembedding(
    model="triton/<your-triton-model>",
    api_base="https://your-triton-api-base/triton/embeddings", # /embeddings endpoint you want litellm to call on your server
    input=["good morning from litellm"],
)
```
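
Note that `aembedding` is an async call, so it has to run inside an event loop. A minimal sketch of running the same request from a plain script, reusing the placeholder model name and `api_base` from above:

```python
import asyncio
import litellm

async def main():
    # placeholder model name and endpoint -- replace with your Triton deployment
    response = await litellm.aembedding(
        model="triton/<your-triton-model>",
        api_base="https://your-triton-api-base/triton/embeddings",
        input=["good morning from litellm"],
    )
    print(response)

asyncio.run(main())
```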
1. Add models to your config.yaml

```yaml
model_list:
  - model_name: my-triton-model
    litellm_params:
      model: triton/<your-triton-model>
      api_base: https://your-triton-api-base/triton/embeddings
```
2. Start the proxy

```shell
$ litellm --config /path/to/config.yaml --detailed_debug
```

3. Send Request to LiteLLM Proxy Server

```python
from openai import OpenAI

# set base_url to your proxy server
# set api_key to send to proxy server
client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.embeddings.create(
    input=["hello from litellm"],
    model="my-triton-model"
)

print(response)

```

The `--header 'Authorization: ...'` flag is optional; it's only required if you're using the LiteLLM proxy with Virtual Keys (see the config sketch after the curl example).

```shell
curl --location 'http://0.0.0.0:4000/embeddings' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-1234' \
--data '{
    "model": "my-triton-model",
    "input": ["write a litellm poem"]
}'
```
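
If you do want the proxy to require Virtual Keys, one common approach is setting a master key in the same config.yaml. A minimal sketch, assuming LiteLLM's `general_settings` block (the `sk-1234` value is a placeholder; replace it with your own secret):

```yaml
general_settings:
  # placeholder master key -- clients then authenticate with `Authorization: Bearer sk-1234`
  master_key: sk-1234
```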