From af3ef83963c02e41f6afac64516a4ed1da9df95f Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Fri, 25 Aug 2023 21:31:50 -0700
Subject: [PATCH] Update readme.md

---
 cookbook/llm-ab-test-server/readme.md | 59 +++++++++++----------------
 1 file changed, 24 insertions(+), 35 deletions(-)

diff --git a/cookbook/llm-ab-test-server/readme.md b/cookbook/llm-ab-test-server/readme.md
index 425a99133..0d771fd2d 100644
--- a/cookbook/llm-ab-test-server/readme.md
+++ b/cookbook/llm-ab-test-server/readme.md
@@ -51,54 +51,43 @@ response = completion(model="command-nightly", messages=messages)
 response = completion(model="claude-2", messages=messages)
 ```
 
-After running the server all completion resposnes, costs and latency can be viewed on the LiteLLM Client UI
+After calling `completion()`, costs and latency can be viewed on the LiteLLM Client UI
 
 ### LiteLLM Client UI
+![pika-1693023669579-1x](https://github.com/BerriAI/litellm/assets/29436595/86633e2f-eda0-4939-a588-84e4c100f36a)
 
-
-Litellm simplifies I/O with all models, the server simply makes a `litellm.completion()` call to the selected model
-
-
-
-- Translating inputs to the provider's completion and embedding endpoints
-- Guarantees [consistent output](https://litellm.readthedocs.io/en/latest/output/), text responses will always be available at `['choices'][0]['message']['content']`
-- Exception mapping - common exceptions across providers are mapped to the [OpenAI exception types](https://help.openai.com/en/articles/6897213-openai-library-error-types-guidance)
-# Usage
-
-  Open In Colab
-
+## Using LiteLLM A/B Testing Server
 # Installation
 ```
 pip install litellm
 ```
-```python
-from litellm import completion
-
-## set ENV variables
-os.environ["OPENAI_API_KEY"] = "openai key"
-os.environ["COHERE_API_KEY"] = "cohere key"
-os.environ["ANTHROPIC_API_KEY"] = "anthropic key"
-
-messages = [{ "content": "Hello, how are you?","role": "user"}]
-
-# openai call
-response = completion(model="gpt-3.5-turbo", messages=messages)
-
-# cohere call
-response = completion(model="command-nightly", messages=messages)
-
-# anthropic
-response = completion(model="claude-2", messages=messages)
-```
-
 Stable version
 ```
 pip install litellm==0.1.424
 ```
+## Clone LiteLLM Git Repo
+```
+git clone https://github.com/BerriAI/litellm/
+```
+
+## Navigate to LiteLLM-A/B Test Server
+```
+cd litellm/cookbook/llm-ab-test-server
+```
+
+## Run the Server
+```
+python3 main.py
+```
+
+## Set your LLM Configs
+Set the LLMs and the weights you want to run A/B tests with.
+
+
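The patch ends at the "Set your LLM Configs" heading without showing what a config looks like. As a rough sketch of weighted A/B routing across litellm models (the `llm_dict` name, the model list, and the weights here are illustrative assumptions, not taken from the repo):

```python
import random

# Hypothetical config: map each model to its share of traffic.
# These names and weights are illustrative, not from the cookbook itself.
llm_dict = {
    "gpt-3.5-turbo": 0.8,
    "claude-2": 0.1,
    "command-nightly": 0.1,
}

def select_model(weights):
    """Pick one model at random, biased by its traffic weight."""
    models = list(weights)
    return random.choices(models, weights=[weights[m] for m in models], k=1)[0]

# A server along these lines would then route each request to the
# selected model, e.g.:
# response = litellm.completion(model=select_model(llm_dict), messages=messages)
chosen = select_model(llm_dict)
```

With this shape of config, shifting traffic between models is just a matter of editing the weights; no request-handling code changes.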