
🚅 LiteLLM - A/B Testing LLMs in Production

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, Azure OpenAI etc.]


100+ Supported Models | Docs | Demo Website

LiteLLM lets you call 100+ LLMs through a single `completion()` interface. This template server lets you define the LLMs you want to A/B test, along with the share of traffic each should receive:

llm_dict = {
    "gpt-4": 0.2,
    "together_ai/togethercomputer/llama-2-70b-chat": 0.4,
    "claude-2": 0.2,
    "claude-1.2": 0.2
}
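Each weight is the fraction of requests routed to that model, so a server like this can pick a model per request in proportion to the weights. A minimal sketch of that selection using Python's `random.choices` (an illustration of the idea, not the server's actual implementation — see main.py for that):

```python
import random

# Candidate models and their A/B traffic weights (should sum to 1)
llm_dict = {
    "gpt-4": 0.2,
    "together_ai/togethercomputer/llama-2-70b-chat": 0.4,
    "claude-2": 0.2,
    "claude-1.2": 0.2,
}

def pick_model(weights: dict) -> str:
    """Pick one model at random, proportionally to its weight."""
    models = list(weights.keys())
    return random.choices(models, weights=list(weights.values()), k=1)[0]

# Over many requests, ~40% of calls go to the llama-2 deployment,
# ~20% each to the other three models.
model = pick_model(llm_dict)
```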

Every model defined above can be called with the same input/output format using litellm's `completion()`:

from litellm import completion

# SET API KEYS in .env (e.g. OPENAI_API_KEY, COHERE_API_KEY, ANTHROPIC_API_KEY)
messages = [{"role": "user", "content": "Hey, how's it going?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
# anthropic call
response = completion(model="claude-2", messages=messages)
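Because every provider's response comes back in the OpenAI format, downstream code can parse it the same way regardless of which model handled the request. A sketch of that parsing, using a hand-built dict standing in for a real response (so no API keys are needed to run it):

```python
# A stand-in for what completion() returns: real responses follow the
# same OpenAI chat-completion schema regardless of provider.
mock_response = {
    "model": "claude-2",
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ],
}

def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-format response."""
    return response["choices"][0]["message"]["content"]

reply = extract_reply(mock_response)  # works for any provider's response
```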

After calling completion(), costs and latency can be viewed on the LiteLLM Client UI:

LiteLLM Client UI


Using LiteLLM A/B Testing Server

Installation

pip install litellm

Stable version

pip install litellm==0.1.424

Clone LiteLLM Git Repo

git clone https://github.com/BerriAI/litellm/

Navigate to the LiteLLM A/B Test Server directory

cd litellm/cookbook/llm-ab-test-server

Run the Server

python3 main.py

Set your LLM Configs

Set the LLMs and the traffic weights you want to A/B test in llm_dict (shown above).
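Since each weight is a share of traffic, the weights should sum to 1. A quick sanity check you could run on your config before starting the server (a hypothetical helper for illustration, not part of litellm):

```python
def validate_weights(weights: dict) -> dict:
    """Normalize A/B test weights so they sum to 1."""
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must be positive")
    return {model: w / total for model, w in weights.items()}

llm_dict = validate_weights({
    "gpt-4": 0.2,
    "together_ai/togethercomputer/llama-2-70b-chat": 0.4,
    "claude-2": 0.2,
    "claude-1.2": 0.2,
})
```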

Support / talk with founders

Why did we build this

  • Need for simplicity: our code was getting extremely complicated managing and translating calls between Azure, OpenAI, and Cohere