From d915fb87297db65c98984e4e7d0319dbd54bcb0d Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Tue, 26 Dec 2023 11:57:00 +0530
Subject: [PATCH] Update README.md

---
 README.md | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 3a4f772a8..52d769a67 100644
--- a/README.md
+++ b/README.md
@@ -24,10 +24,15 @@
 
 
 
-LiteLLM manages
-- Translating inputs to the provider's `completion` and `embedding` endpoints
-- Guarantees [consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
-- Load-balance across multiple deployments (e.g. Azure/OpenAI) - `Router` **1k+ requests/second**
+This Package Provides:
+- Python client to call 100+ LLMs in OpenAI Format
+  - Translate inputs to provider's `completion` and `embedding` endpoints
+  - [Consistent output](https://docs.litellm.ai/docs/completion/output), text responses will always be available at `['choices'][0]['message']['content']`
+  - Load-balance multiple deployments (e.g. Azure/OpenAI) - `Router` **1k+ requests/second**
+- OpenAI Proxy Server:
+  - Track spend across multiple projects/people
+  - Call 100+ LLMs in OpenAI Format
+
 # OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
 
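The consistent-output guarantee the patched README text describes can be illustrated with a short sketch. The dict below is a hand-written stand-in for the OpenAI-format response shape (no actual `litellm.completion()` call is made, and the `"Hello!"` content is invented for the example):

```python
# Sketch of the OpenAI-format response shape that the README says LiteLLM
# normalizes all providers to. This dict is a hand-built example, not the
# result of a real litellm.completion(model=..., messages=...) call.
response = {
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
}

# Per the README, regardless of provider the text response is always
# available at this path:
text = response["choices"][0]["message"]["content"]
print(text)  # Hello!
```

The point of the fixed path is that calling code never needs provider-specific branching to extract the generated text.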