
🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

Bug Report · Feature Request

PyPI Version CircleCI Y Combinator W23

Docs 100+ Supported Models Demo Video

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Consistent output - text responses will always be available at ['choices'][0]['message']['content']
  • Exception mapping - common exceptions across providers are mapped to the OpenAI exception types

🚨 Seeing errors? Chat on WhatsApp Chat on Discord

05/10/2023: LiteLLM is adopting Semantic Versioning for all commits. Learn more

Usage

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
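
Because the output format is consistent across providers, the reply text from either call above can be read from the same path on the response object:

# same access pattern regardless of which provider produced `response`
print(response['choices'][0]['message']['content'])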

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion('claude-2', messages, stream=True)
for chunk in result:
  print(chunk['choices'][0]['delta'])
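
Async completion and async streaming are available as well. A minimal sketch using litellm.acompletion, assuming it mirrors completion's arguments and that, with stream=True, the awaited result can be iterated with async for:

import asyncio
from litellm import acompletion

async def main():
    # async streaming - chunks have the same shape as in the sync example above
    response = await acompletion(model="gpt-3.5-turbo", messages=messages, stream=True)
    async for chunk in response:
        print(chunk['choices'][0]['delta'])

asyncio.run(main())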

Caching (Docs)

LiteLLM supports caching completion() and embedding() calls for all LLMs. A hosted cache is also available via the LiteLLM API.

import litellm
from litellm.caching import Cache
import os

litellm.cache = Cache()
os.environ['OPENAI_API_KEY'] = ""
# add to cache
response1 = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}], 
    caching=True
)
# returns cached response
response2 = litellm.completion(
    model="gpt-3.5-turbo", 
    messages=[{"role": "user", "content": "why is LiteLLM amazing?"}], 
    caching=True
)

print(f"response1: {response1}")
print(f"response2: {response2}")

OpenAI Proxy Server (Docs)

Spin up a local server to translate OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.)

This works for async + streaming as well.

litellm --model <model_name>

Running your model locally or on a custom endpoint? Set the --api-base parameter. See how
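
Once the proxy is running, you can point the OpenAI Python SDK at it instead of api.openai.com. A minimal sketch, assuming the pre-1.0 openai package and that the proxy is listening on http://0.0.0.0:8000 (the host/port are illustrative - use whatever the CLI prints on startup):

import openai

openai.api_base = "http://0.0.0.0:8000"  # illustrative - match your proxy's address
openai.api_key = "anything"              # provider credentials are handled by the proxy

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response['choices'][0]['message']['content'])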

Supported Providers (Docs)

LiteLLM supports completion, streaming, async completion, and async streaming across the following providers (see the docs for the per-provider support matrix):

  • openai
  • cohere
  • anthropic
  • replicate
  • huggingface
  • together_ai
  • openrouter
  • vertex_ai
  • palm
  • ai21
  • baseten
  • azure
  • sagemaker
  • bedrock
  • vllm
  • nlp_cloud
  • aleph alpha
  • petals
  • ollama
  • deepinfra
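
Switching providers is just a matter of changing the model string; several providers are addressed with a provider/ prefix. The model identifiers below are illustrative examples, not an exhaustive or verified list:

# same call shape - only the model string (and the relevant API key env var) changes
response = completion(model="openrouter/mistralai/mistral-7b-instruct", messages=messages)
response = completion(model="together_ai/togethercomputer/llama-2-70b-chat", messages=messages)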

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Learn more on how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.

Contributors