openai-proxy

A simple, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

Usage

$ git clone https://github.com/BerriAI/litellm.git
$ cd ./litellm/openai-proxy
$ uvicorn main:app --host 0.0.0.0 --port 8000
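
Before the uvicorn step above, install the dependencies and set any provider keys. A minimal sketch, assuming you use the bundled requirements.txt and .env.template:

$ pip install -r requirements.txt
$ cp .env.template .env    # then fill in API keys for the providers you use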

Endpoints:

  • /chat/completions - chat completions endpoint to call 100+ LLMs
  • /models - lists the models available on the server
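
A quick way to verify the server is up is to hit /models; a minimal check, assuming the default host and port from the Usage step above:

curl http://0.0.0.0:8000/models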

Making Requests to the Proxy

Curl

curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

Replace the OpenAI API base

import openai

# point the SDK at the proxy instead of api.openai.com
openai.api_base = "http://0.0.0.0:8000"

# cohere call - provider-specific params (api_key here) are passed through the proxy
response = openai.ChatCompletion.create(
    model="command-nightly",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    api_key="your-cohere-api-key",
)

# bedrock call - AWS credentials are passed through as extra params
response = openai.ChatCompletion.create(
    model="bedrock/anthropic.claude-instant-v1",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    aws_access_key_id="",
    aws_secret_access_key="",
    aws_region_name="us-west-2",
)

print(response)
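
Because the proxy is OpenAI-compatible, streaming can be requested the same way as against OpenAI itself. A minimal sketch with the pre-1.0 openai SDK used above, assuming the proxy forwards stream=True to the underlying provider:

# streaming call
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    stream=True,  # assumption: the proxy passes streaming through
)
for chunk in response:
    # each chunk carries an incremental "delta", as in the OpenAI streaming API
    print(chunk["choices"][0]["delta"].get("content", ""), end="")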

See the LiteLLM docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, and other providers.