
---
displayed_sidebar: tutorialSidebar
---

# litellm

import QueryParamReader from '../src/components/queryParamReader.js'


a lightweight package that simplifies calling OpenAI, Azure, Cohere, Anthropic, and Huggingface API endpoints. It manages:

- translating inputs to the provider's completion and embedding endpoints
- guaranteeing consistent output: text responses are always available at `['choices'][0]['message']['content']`
- mapping exceptions: common exceptions across providers are mapped to the OpenAI exception types (see the sketch after this list)
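
Exception mapping means you can catch errors from any provider with the OpenAI exception classes. A minimal sketch, assuming the pre-1.0 `openai` package layout (`openai.error`) and a deliberately invalid key:

```python
from litellm import completion
import openai  # litellm raises provider errors as OpenAI exception types

messages = [{"role": "user", "content": "Hello"}]

try:
    # invalid key used on purpose to trigger an auth failure
    completion(model="claude-2", messages=messages, api_key="bad-key")
except openai.error.AuthenticationError as e:
    # the Anthropic auth error surfaces as the OpenAI exception type
    print(f"auth error: {e}")
```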

## usage

<QueryParamReader/>

Demo - https://litellm.ai/playground
Read the docs - https://docs.litellm.ai/docs/

## quick start

```shell
pip install litellm
```

Code Sample: Getting Started Notebook
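
If you'd rather not open the notebook, a minimal first call looks like the sketch below. It assumes your provider API key is set as an environment variable (the key and prompt text here are placeholders):

```python
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-..."  # or export it in your shell

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# same call shape for any supported provider
response = completion(model="gpt-3.5-turbo", messages=messages)

# the text is always at the same path, regardless of provider
print(response['choices'][0]['message']['content'])
```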

### Stable version

```shell
pip install litellm==0.1.345
```

## Streaming Queries

liteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

```python
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# openai
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion(model="claude-2", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
```

## support / talk with founders

## why did we build this

- **Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.