From fde7c0ec97f8274d6982961460e4a5fd9de8a75c Mon Sep 17 00:00:00 2001
From: Krish Dholakia
Date: Tue, 26 Dec 2023 12:27:37 +0530
Subject: [PATCH] Update README.md

---
 README.md | 70 +++++++++++++++++++++++++++----------------------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/README.md b/README.md
index 52d769a67..0d04f91dd 100644
--- a/README.md
+++ b/README.md
@@ -24,43 +24,14 @@
 
-This Package Provides:
-- Python client to call 100+ LLMs in OpenAI Format
-  - Translate inputs to provider's `completion` and `embedding` endpoints
-  - [Consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
-  - Load-balance multiple deployments (e.g. Azure/OpenAI) - `Router` **1k+ requests/second**
-- OpenAI Proxy Server:
-  - Track spend across multiple projects/people
-  - Call 100+ LLMs in OpenAI Format
+LiteLLM manages:
+- Translate inputs to provider's `completion` and `embedding` endpoints
+- [Consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
+- Load-balance multiple deployments (e.g. Azure/OpenAI) - `Router` **1k+ requests/second**
+
+[**Jump to OpenAI Proxy Docs**](https://github.com/BerriAI/litellm?tab=readme-ov-file#openai-proxy---docs)
 
-# OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
-
-Track spend across multiple projects/people.
-
-### Step 1: Start litellm proxy
-```shell
-$ litellm --model huggingface/bigcode/starcoder
-
-#INFO: Proxy running on http://0.0.0.0:8000
-```
-
-### Step 2: Replace openai base
-```python
-import openai # openai v1.0.0+
-client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:8000") # set proxy to base_url
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
-    {
-        "role": "user",
-        "content": "this is a test request, write a short poem"
-    }
-])
-
-print(response)
-```
-
-# Usage ([**Docs**](https://docs.litellm.ai/docs/))
+# Installation 🚀
 
 > [!IMPORTANT]
 > LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide [here](https://docs.litellm.ai/docs/migration)
 
@@ -70,10 +41,13 @@ print(response)
 Open In Colab
+
 ```
 pip install litellm
 ```
 
+# Usage ([**Docs**](https://docs.litellm.ai/docs/))
+
 ```python
 from litellm import completion
 import os
@@ -142,6 +116,32 @@ litellm.success_callback = ["langfuse", "llmonitor"] # log input/output to langf
 response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
 ```
 
+# OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
+
+Track spend across multiple projects/people.
+
+### Step 1: Start litellm proxy
+```shell
+$ litellm --model huggingface/bigcode/starcoder
+
+#INFO: Proxy running on http://0.0.0.0:8000
+```
+
+### Step 2: Replace openai base
+```python
+import openai # openai v1.0.0+
+client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:8000") # set proxy to base_url
+# request sent to model set on litellm proxy, `litellm --model`
+response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
+    {
+        "role": "user",
+        "content": "this is a test request, write a short poem"
+    }
+])
+
+print(response)
+```
+
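A minimal sketch of the `Router` mentioned in the feature list above ("Load-balance multiple deployments ... **1k+ requests/second**"): it groups several deployments under one logical model name and spreads calls across them. This is illustrative only; the Azure deployment name and the environment variable names below are placeholders.

```python
import os
from litellm import Router

# Two deployments registered under the same logical model name;
# the Router spreads traffic across them.
model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # the name callers use
        "litellm_params": {
            "model": "azure/<your-deployment-name>",  # placeholder Azure deployment
            "api_key": os.environ["AZURE_API_KEY"],
            "api_base": os.environ["AZURE_API_BASE"],
            "api_version": os.environ["AZURE_API_VERSION"],
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # plain OpenAI
            "api_key": os.environ["OPENAI_API_KEY"],
        },
    },
]

router = Router(model_list=model_list)

# Same call shape as litellm.completion(); the Router picks the deployment.
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response['choices'][0]['message']['content'])
```

Callers keep addressing the logical `gpt-3.5-turbo` name; which underlying deployment serves a given request is the Router's decision.
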
 ## Supported Provider ([Docs](https://docs.litellm.ai/docs/providers))
 | Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
 | ------------- | ------------- | ------------- | ------------- | ------------- |