openai-proxy

A simple, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

usage

$ git clone https://github.com/BerriAI/litellm.git
$ cd ./litellm/openai-proxy
$ pip install -r requirements.txt
$ uvicorn main:app --host 0.0.0.0 --port 8000
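
Once the server is running, you can smoke-test it with a plain HTTP call. A minimal sketch, assuming the proxy exposes the OpenAI-compatible /chat/completions route and forwards the Authorization header to the underlying provider (both assumptions based on the examples below):

import requests

# hypothetical smoke test against the local proxy
response = requests.post(
    "http://0.0.0.0:8000/chat/completions",  # assumed OpenAI-compatible route
    headers={"Authorization": "Bearer my-cohere-key"},  # provider key, passed as a header
    json={
        "model": "command-nightly",
        "messages": [{"role": "user", "content": "Hey!"}],
    },
)
print(response.json())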

replace openai base

import openai

openai.api_base = "http://0.0.0.0:8000"  # point the OpenAI SDK at the proxy

# call cohere
openai.api_key = "my-cohere-key"  # this gets passed to the proxy as a header

response = openai.ChatCompletion.create(
    model="command-nightly",
    messages=[{"role": "user", "content": "Hey!"}],
)

# call bedrock
response = openai.ChatCompletion.create(
    model="bedrock/anthropic.claude-instant-v1",
    messages=[{"role": "user", "content": "Hey!"}],
    aws_access_key_id="",      # your AWS access key id
    aws_secret_access_key="",  # your AWS secret access key
    aws_region_name="us-west-2",
)

print(response)
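
Since responses come back in the OpenAI format, streaming should work through the same client interface. A minimal sketch, assuming the proxy forwards OpenAI-style streamed chunks when stream=True is set (not confirmed by this README):

# call cohere with streaming
response = openai.ChatCompletion.create(
    model="command-nightly",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
)
for chunk in response:
    print(chunk)  # each chunk is an OpenAI-format delta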

See the litellm documentation for how to call Huggingface, Bedrock, TogetherAI, Anthropic, and other providers.
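
Other providers should follow the same pattern as the Cohere call above. A sketch for Anthropic, where the model name and key-as-header behavior are assumptions based on the examples in this README:

# call anthropic
openai.api_key = "my-anthropic-key"  # passed through as a header, as with Cohere
response = openai.ChatCompletion.create(
    model="claude-instant-1",  # assumed model name
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)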