🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Mapping exceptions: common exceptions across providers are mapped to the OpenAI exception types (a sketch follows this list)
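
A minimal sketch of the exception mapping, assuming the pre-1.0 openai SDK that this version targets (the bad key is a placeholder):

import openai
from litellm import completion

try:
    # an invalid Cohere key surfaces as an OpenAI-style AuthenticationError
    completion(model="command-nightly",
               messages=[{"role": "user", "content": "hi"}],
               api_key="bad-cohere-key")  # placeholder invalid key
except openai.error.AuthenticationError as e:
    print("caught OpenAI-typed auth error:", e)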

Usage

pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
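
Because the output format is consistent across providers, the generated text is always at the same path:

print(response['choices'][0]['message']['content'])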

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion('claude-2', messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
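
To reassemble the full text, concatenate the deltas; a minimal sketch, assuming each delta is a plain dict whose 'content' key can be absent on the final chunk:

full_text = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    # skip chunks that carry no text (e.g. the final finish-reason chunk)
    full_text += chunk['choices'][0]['delta'].get('content') or ""
print(full_text)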

OpenAI Proxy Server (Docs)

Spin up a local server to translate OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.).

This works for async + streaming as well.

litellm --model <model_name>
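
The proxy exposes an OpenAI-compatible endpoint, so any OpenAI client can point at it. A minimal sketch with the pre-1.0 openai SDK (the port and placeholder key are assumptions for this version):

import openai

openai.api_base = "http://0.0.0.0:8000"  # assumed default proxy address
openai.api_key = "anything"              # the proxy holds the real provider credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response['choices'][0]['message']['content'])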

Running your model locally or on a custom endpoint? Set the --api-base parameter (see how).
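
For example, pointing the proxy at a self-hosted endpoint (the model name and URL are illustrative):

litellm --model huggingface/bigcode/starcoder --api-base https://my-endpoint.example.com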

Supported Providers (Docs)

Per-provider support for Completion, Streaming, Async Completion, and Async Streaming is detailed in the Docs. Supported providers:

  • openai
  • cohere
  • anthropic
  • replicate
  • huggingface
  • together_ai
  • openrouter
  • vertex_ai
  • palm
  • ai21
  • baseten
  • azure
  • sagemaker
  • bedrock
  • vllm
  • nlp_cloud
  • aleph_alpha
  • petals
  • ollama
  • deepinfra
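
Provider routing is driven by the model-name prefix; a minimal sketch (the model identifiers are illustrative):

from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# Hugging Face Inference API
response = completion(model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0", messages=messages)

# Together AI
response = completion(model="together_ai/togethercomputer/llama-2-70b-chat", messages=messages)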

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! 🚀

  • Push your fork to your GitHub repo
  • Submit a PR from there

Learn more on how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing and translating calls between Azure, OpenAI, and Cohere.

Contributors