From 0ecc21f84047c575ec0492342aea680b878cd174 Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Wed, 27 Dec 2023 11:53:35 +0530
Subject: [PATCH] Update README.md - add proxy key management

---
 README.md | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3ea9196ab..08f5db073 100644
--- a/README.md
+++ b/README.md
@@ -115,8 +115,9 @@ response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content
 
 # OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
 
-Track spend across multiple projects/people.
+Track spend across multiple projects/people
 
+## Quick Start Proxy - CLI
 ### Step 1: Start litellm proxy
 ```shell
 $ litellm --model huggingface/bigcode/starcoder
@@ -124,7 +125,7 @@ $ litellm --model huggingface/bigcode/starcoder
 #INFO: Proxy running on http://0.0.0.0:8000
 ```
 
-### Step 2: Replace openai base
+### Step 2: Make ChatCompletions Request to Proxy
 ```python
 import openai # openai v1.0.0+
 client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:8000") # set proxy to base_url
@@ -139,6 +140,17 @@ response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
 print(response)
 ```
 
+## Proxy Key Management ([Docs](https://docs.litellm.ai/docs/proxy/virtual_keys))
+Track spend, set budgets, and create virtual keys for the proxy
+`POST /key/generate`
+
+```shell
+curl 'http://0.0.0.0:8000/key/generate' \
+--header 'Authorization: Bearer sk-1234' \
+--header 'Content-Type: application/json' \
+--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'
+```
+
 ### [Beta] Proxy UI
 
 A simple UI to add new models and let your users create keys.
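As a companion to the curl call this patch adds, the same `POST /key/generate` request can be built from Python with the standard library. This is a minimal sketch, not part of the patch: it assumes the proxy address (`http://0.0.0.0:8000`) and master key (`sk-1234`) used in the patch's example, and only constructs the request object without sending it.

```python
import json
import urllib.request

PROXY_BASE = "http://0.0.0.0:8000"  # proxy address from the patch's example
MASTER_KEY = "sk-1234"              # example master key from the patch

def build_key_request(models, duration, metadata):
    """Build (but do not send) the POST /key/generate request."""
    payload = json.dumps(
        {"models": models, "duration": duration, "metadata": metadata}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{PROXY_BASE}/key/generate",
        data=payload,
        headers={
            "Authorization": f"Bearer {MASTER_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_key_request(
    models=["gpt-3.5-turbo", "gpt-4", "claude-2"],
    duration="20m",
    metadata={"user": "ishaan@berri.ai", "team": "core-infra"},
)
# With a proxy running, urllib.request.urlopen(req) would return a JSON
# body containing the newly generated virtual key.
```

The returned key can then be passed as `api_key` to the `openai.OpenAI` client shown in Step 2 of the patched README.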