
🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

100+ Supported Models | Docs

📣 1-click deploy your own LLM proxy server. Grab time if you're interested!

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Mapping exceptions: common exceptions across providers are mapped to the OpenAI exception types (see the sketch below)
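
Because exceptions are mapped to OpenAI types, a single except clause can cover every provider. A minimal sketch, assuming the pre-1.0 openai package, whose openai.error classes are the types LiteLLM maps to:

import os
from openai.error import OpenAIError  # base class of the mapped exception types
from litellm import completion

os.environ["COHERE_API_KEY"] = "bad-key"  # deliberately invalid

try:
    completion(model="command-nightly",
               messages=[{"content": "Hello", "role": "user"}])
except OpenAIError as e:
    # the provider-specific auth failure arrives as an OpenAI exception type
    print(f"Caught mapped exception: {e}")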

Usage

By default we provide a free $10 key to try all providers supported on LiteLLM. Try it now 👇

pip install litellm

from litellm import completion
import os

## We provide a free $10 key to try all providers supported on LiteLLM.
## set ENV variables 
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
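
Because output is consistent across providers, the generated text is read the same way for the OpenAI call and the Cohere call above:

print(response['choices'][0]['message']['content'])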

Don't have a key? We'll give you access 👉 https://docs.litellm.ai/docs/proxy_api

Streaming

LiteLLM supports streaming the model response back: pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2 call
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
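
The streamed deltas can be reassembled into the full reply as they arrive. A minimal sketch, assuming each chunk's delta may omit the 'content' key (e.g. role-only deltas):

full_reply = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    # append each streamed piece; skip chunks that carry no text
    full_reply += chunk['choices'][0]['delta'].get('content') or ""
print(full_reply)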

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .
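
To iterate on a single area without running the whole suite, pytest's standard -k filter can narrow the run (a generic pytest option, not a repo-specific requirement):

pytest -k "completion"  # run only tests whose names match "completion"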

Step 4: Submit a PR with your changes! 🚀

  • Push your fork to your GitHub repo
  • Submit a PR from there

Learn more about how to make a PR

Support / talk with founders

Why did we build this

  • Need for simplicity: our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.

Contributors