🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

Evaluate LLMs → OpenAI-Compatible Server

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Mapping exceptions: common exceptions across providers are mapped to the OpenAI exception types (see the sketch after this list)
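
For example, a failed provider call surfaces as an OpenAI exception type, so one except block covers every provider. A minimal sketch, assuming an invalid Cohere key triggers an authentication error that LiteLLM re-raises as the OpenAI type:

import os
import openai
from litellm import completion

os.environ["COHERE_API_KEY"] = "bad-key"  # deliberately invalid, for illustration

try:
    completion(model="command-nightly",
               messages=[{"role": "user", "content": "Hello"}])
except openai.AuthenticationError as e:
    # the Cohere error is surfaced as the corresponding OpenAI exception type
    print("Authentication error (same type for every provider):", e)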

10/05/2023: LiteLLM is adopting Semantic Versioning for all commits. Learn more
10/16/2023: Self-hosted OpenAI-proxy server Learn more

Usage (Docs)

Important

LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
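
Both calls return the response in the OpenAI format, so the reply text lives in the same place regardless of provider:

# the reply text is in the same place for the OpenAI call and the Cohere call above
print(response['choices'][0]['message']['content'])

# token counts are normalized to the OpenAI 'usage' shape (where the provider reports them)
print(response['usage'])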

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator as the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion('claude-2', messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
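
A minimal sketch of stitching the streamed chunks back into a full reply, assuming the final chunk's delta carries no content:

full_reply = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    piece = chunk.choices[0].delta.content  # may be None on the final chunk
    if piece:
        full_reply += piece
print(full_reply)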

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to LLMonitor, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["PROMPTLAYER_API_KEY"] = "your-promptlayer-key"
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"

os.environ["OPENAI_API_KEY"]

# set callbacks
litellm.success_callback = ["promptlayer", "llmonitor"] # log input/output to promptlayer, llmonitor, supabase

#openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

OpenAI Proxy - (Docs)

If you don't want to make code changes to add the litellm package to your code base, you can use the LiteLLM proxy: a server that calls 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc.) in the OpenAI ChatCompletions & Completions format.

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:8000

Step 2: Replace openai base

import openai
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")
print(client.chat.completions.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
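
Because the proxy speaks the OpenAI protocol, streaming works through the same client. A minimal sketch (the model name and prompt are illustrative):

stream = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Write a haiku about trains"}],
    stream=True,
)
for chunk in stream:
    # each chunk follows the OpenAI streaming schema; content may be None on the final chunk
    print(chunk.choices[0].delta.content or "", end="")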

Supported Providers (Docs)

Each provider's support for Completion, Streaming, Async Completion, and Async Streaming is detailed in the Docs; a minimal async sketch follows the list.
openai
azure
aws - sagemaker
aws - bedrock
cohere
anthropic
huggingface
replicate
together_ai
openrouter
google - vertex_ai
google - palm
ai21
baseten
vllm
nlp_cloud
aleph alpha
petals
ollama
deepinfra
perplexity-ai
anyscale
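
Several of the providers above also support async calls. A minimal sketch using litellm.acompletion (model name is illustrative):

import asyncio
from litellm import acompletion

async def main():
    # async, non-streaming call; the response has the same OpenAI shape as completion()
    response = await acompletion(model="gpt-3.5-turbo",
                                 messages=[{"role": "user", "content": "Hello"}])
    print(response['choices'][0]['message']['content'])

asyncio.run(main())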

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors