
# 🚅 LiteLLM - A/B Testing LLMs in Production

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, Azure OpenAI etc.]


100+ Supported Models | Docs | Demo Website

LiteLLM allows you to call 100+ LLMs using `completion`. This template server lets you define LLMs together with their A/B test ratios:

```python
llm_dict = {
    "gpt-4": 0.2,
    "together_ai/togethercomputer/llama-2-70b-chat": 0.4,
    "claude-2": 0.2,
    "claude-1.2": 0.2
}
```
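A minimal sketch of how the server could route traffic according to these ratios, using Python's built-in `random.choices` for weighted sampling (the `select_model` helper is illustrative, not part of the template):

```python
import random

llm_dict = {
    "gpt-4": 0.2,
    "together_ai/togethercomputer/llama-2-70b-chat": 0.4,
    "claude-2": 0.2,
    "claude-1.2": 0.2
}

def select_model(weights: dict) -> str:
    # weighted sampling: over many requests, each model receives
    # traffic roughly proportional to its A/B test ratio
    models = list(weights.keys())
    return random.choices(models, weights=list(weights.values()), k=1)[0]

model = select_model(llm_dict)  # e.g. the llama-2 model ~40% of the time
```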

All models defined can be called with the same input/output format using `litellm.completion`:

```python
from litellm import completion
# SET API KEYS in .env
messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
# anthropic call
response = completion(model="claude-2", messages=messages)
```

After running the server, all completion responses, costs, and latency can be viewed on the LiteLLM Client UI.

*(Screenshot: LiteLLM Client UI)*

LiteLLM simplifies I/O across all models; the server simply makes a `litellm.completion()` call to the selected model:

- Translating inputs to the provider's completion and embedding endpoints
- Guaranteeing consistent output: text responses are always available at `['choices'][0]['message']['content']` (see the sketch below)
- Exception mapping: common exceptions across providers are mapped to the OpenAI exception types
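Because every provider returns the same schema and raises OpenAI-style exceptions, one parsing and error-handling path covers all models. A short sketch (the model list here is illustrative):

```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

for model in ["gpt-3.5-turbo", "command-nightly", "claude-2"]:
    try:
        response = completion(model=model, messages=messages)
        # consistent output: the text is always at the same path
        print(response["choices"][0]["message"]["content"])
    except Exception as e:
        # provider errors are mapped to OpenAI exception types,
        # so a single handler works across providers
        print(f"{model} failed: {e}")
```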

## Usage

```shell
pip install litellm
```
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-2", messages=messages)
```

## Stable version

```shell
pip install litellm==0.1.424
```

## Support / talk with founders

## Why did we build this

- Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, and Cohere.