
🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]


100+ Supported Models | Docs | Demo Website

LiteLLM manages:

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Exception mapping: common exceptions across providers are mapped to the OpenAI exception types (see the sketch after this list)
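
Because exceptions are mapped, a single try/except can cover every provider. A minimal sketch, assuming the openai 0.x package (which exposes its error classes under openai.error):

from openai.error import OpenAIError
from litellm import completion

try:
    response = completion(model="claude-2", messages=[{"role": "user", "content": "Hi"}])
except OpenAIError as e:
    # provider-specific failures (e.g. a bad ANTHROPIC_API_KEY) surface as OpenAI exception types
    print(f"handled uniformly: {e}")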

🤝 Schedule a 1-on-1 Session: Book a 1-on-1 session with Krrish and Ishaan, the founders, to discuss any issues, provide feedback, or explore how we can improve LiteLLM for you.

Usage

pip install litellm
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic
response = completion(model="claude-2", messages=messages)
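
Whichever provider handled the call, the reply is read the same way; a minimal sketch using the response from the last call above:

# the text response is always at the same path
print(response['choices'][0]['message']['content'])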

Stable version

pip install litellm==0.1.424

Streaming

LiteLLM supports streaming the model response back: pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
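
To reassemble the full reply, concatenate the streamed deltas. A minimal sketch, assuming each delta is a dict that may or may not carry a 'content' key:

full_reply = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    delta = chunk['choices'][0]['delta']
    full_reply += delta.get('content') or ''  # deltas without content (e.g. the initial role) add nothing
print(full_reply)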

Support / talk with founders

Why did we build this

  • Need for simplicity: our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere

Contributors