
🚅 litellm

a light package to simplify calling OpenAI, Azure, Cohere, Anthropic, and Hugging Face API endpoints. It manages:

  • translating inputs to the provider's completion and embedding endpoints
  • guaranteeing consistent output - text responses are always available at ['choices'][0]['message']['content']
  • exception mapping - common exceptions across providers are mapped to the OpenAI exception types (see the sketch below)
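
For example, a provider-specific authentication failure can be caught with the familiar OpenAI exception class. This is a minimal sketch of the exception-mapping behaviour described above; it assumes the mapped errors surface as the standard openai.error types, which may vary by version:

import os
import openai.error
from litellm import completion

os.environ["COHERE_API_KEY"] = "bad-key"  # deliberately invalid key

try:
    # cohere call with an invalid key
    completion(model="command-nightly",
               messages=[{"role": "user", "content": "Hello"}])
except openai.error.AuthenticationError as e:
    # the Cohere auth error surfaces as an OpenAI-style AuthenticationError
    print(f"caught mapped exception: {e}")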

usage

Demo - https://litellm.ai/playground
Read the docs - https://docs.litellm.ai/docs/

quick start

pip install litellm
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
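
Because the output follows the OpenAI format regardless of provider (per the consistent-output guarantee above), the reply text is read the same way for both calls:

print(response['choices'][0]['message']['content'])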

Code Sample: Getting Started Notebook

Stable version

pip install litellm==0.1.424

Streaming Queries

liteLLM supports streaming the model response back: pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Hugging Face models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2 call
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
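
To reconstruct the full reply from a stream, the deltas can be concatenated. A minimal sketch, assuming each chunk follows the OpenAI streaming format shown above and that some chunks (e.g. the final one) may carry no content field:

reply = ""
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    delta = chunk['choices'][0]['delta']
    # skip chunks without a content field (e.g. role-only or final chunks)
    reply += delta.get('content', '')
print(reply)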

support / talk with founders

why did we build this

  • Need for simplicity: our code was getting extremely complicated managing and translating calls between Azure, OpenAI, and Cohere.