# 🚅 litellm
Get Support / Join the community 👉
a simple & light package to call OpenAI, Azure, Cohere, and Anthropic API endpoints
litellm manages:

- translating inputs to the provider's completion and embedding endpoints
- guaranteeing consistent output: text responses are always available at `['choices'][0]['message']['content']` (see the sketch below)
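
For example, the same accessor works regardless of which provider served the call. A minimal sketch (assumes `OPENAI_API_KEY` and `COHERE_API_KEY` are already set, as in the quick start below):

```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# two different providers, one response shape
for model in ["gpt-3.5-turbo", "command-nightly"]:
    response = completion(model=model, messages=messages)
    print(response['choices'][0]['message']['content'])
```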
## usage
Read the docs - https://litellm.readthedocs.io/en/latest/
### quick start
```
pip install litellm
```
```python
import os
from litellm import completion

## set ENV variables
# ENV variables can be set in a .env file, too. Example in .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion("command-nightly", messages)

# azure openai call
response = completion("chatgpt-test", messages, azure=True)

# openrouter call
response = completion("google/palm-2-codechat-bison", messages)
```
Code Sample: Getting Started Notebook
### Stable version
```
pip install litellm==0.1.345
```
### Streaming Queries
liteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response.
```python
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])
```
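
The deltas follow the OpenAI streaming format, so the full reply can be assembled by concatenating the `content` pieces. A minimal sketch, assuming each delta is a plain dict as in the loop above:

```python
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

full_text = ""
for chunk in response:
    delta = chunk['choices'][0]['delta']
    full_text += delta.get('content') or ''  # some chunks carry no content (e.g. the role-only first delta)
print(full_text)
```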
## hosted version
## why did we build this
- Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.
## Support
Contact us at ishaan@berri.ai / krrish@berri.ai