🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

100+ Supported Models | Docs | Demo Website

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Exception mapping: common exceptions across providers are mapped to the OpenAI exception types (see the sketch after this list)
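
Because exceptions are mapped to OpenAI's exception types, existing OpenAI-style error handling can be reused across providers. A minimal sketch, assuming the pre-1.0 openai package where errors live under openai.error:

import os
from openai.error import OpenAIError
from litellm import completion

os.environ["COHERE_API_KEY"] = "cohere key"

try:
    # Cohere call; any provider error surfaces as an OpenAI exception type
    response = completion(model="command-nightly",
                          messages=[{"role": "user", "content": "Hello, how are you?"}])
    print(response['choices'][0]['message']['content'])
except OpenAIError as e:
    print(f"Provider error, mapped to an OpenAI exception type: {e}")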

Usage

pip install litellm
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic
response = completion(model="claude-2", messages=messages)

Stable version

pip install litellm==0.1.424

Streaming

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
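
The streamed deltas can be concatenated to rebuild the full response text. A minimal sketch, assuming each chunk's delta is a dict that may or may not carry a 'content' key:

full_text = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    delta = chunk['choices'][0]['delta']
    # Some chunks (e.g. the final one) may carry no new content
    full_text += delta.get('content') or ""
print(full_text)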

Support / talk with founders

Why did we build this

  • Need for simplicity: our code started to get extremely complicated managing and translating calls between Azure, OpenAI, and Cohere

Contributors