
# litellm-server

A simple, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## Usage

```shell
docker run -e PORT=8000 -p 8000:8000 ghcr.io/berriai/litellm:latest

# UVICORN: OpenAI Proxy running on http://0.0.0.0:8000
```
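Because the proxy speaks the OpenAI wire format, an OpenAI client can be pointed straight at it. A minimal sketch, assuming the `openai` Python package (v1+ interface; older releases set `openai.api_base` instead) and the container started above:

```python
from openai import OpenAI

# The api_key is a placeholder: provider credentials live in the
# proxy's environment, not in the client. Depending on how the routes
# are mounted you may need a "/v1" suffix on base_url, as in the curl
# examples below.
client = OpenAI(
    base_url="http://0.0.0.0:8000",
    api_key="anything",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```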

Endpoints:

- `/chat/completions` - chat completions endpoint for calling 100+ LLMs
- `/router/completions` - for multiple deployments of the same model (e.g. Azure OpenAI); routes each request to the least-used deployment
- `/models` - lists the models available on the server (queried in the sketch below)
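For instance, listing the available models is a single GET. A quick sketch with the `requests` package, assuming the container from the Usage section is up:

```python
import requests

# Ask the running proxy which models it currently serves.
resp = requests.get("http://0.0.0.0:8000/models")
resp.raise_for_status()
print(resp.json())
```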

## Making Requests to the Proxy

### Curl

**Call OpenAI**

```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'
```
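The same request issued from Python without an SDK, as a sketch using the `requests` package; the payload mirrors the curl body above:

```python
import requests

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7,
}
resp = requests.post("http://0.0.0.0:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```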

**Call Bedrock**

```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "bedrock/anthropic.claude-instant-v1",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'
```
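Since every provider sits behind the same OpenAI-compatible schema, switching providers is only a change to the `model` string. A sketch that assumes the proxy's environment holds valid credentials for each provider named:

```python
import requests

# Same request shape, different backends: the provider is selected
# by the model string alone.
for model in ["gpt-3.5-turbo", "bedrock/anthropic.claude-instant-v1"]:
    resp = requests.post(
        "http://0.0.0.0:8000/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": "Say this is a test!"}],
        },
    )
    print(model, "->", resp.json()["choices"][0]["message"]["content"])
```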

## Running Locally

```shell
$ git clone https://github.com/BerriAI/litellm.git
$ cd ./litellm/litellm_server
$ uvicorn main:app --host 0.0.0.0 --port 8000
```

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.