From 3b89cff65ea0b943e112510495171d261dfbac07 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Wed, 29 Nov 2023 16:33:00 -0800
Subject: [PATCH] (docs) simple proxy

---
 docs/my-website/docs/simple_proxy.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/my-website/docs/simple_proxy.md b/docs/my-website/docs/simple_proxy.md
index f59c819021..18f5aeaff7 100644
--- a/docs/my-website/docs/simple_proxy.md
+++ b/docs/my-website/docs/simple_proxy.md
@@ -7,8 +7,8 @@ import TabItem from '@theme/TabItem';
 LiteLLM Server manages:
 
 * Calling 100+ LLMs [Huggingface/Bedrock/TogetherAI/etc.](#other-supported-models) in the OpenAI `ChatCompletions` & `Completions` format
+* Load balancing - between [Multiple Models](#multiple-models---quick-start) + [Deployments of the same model](#multiple-instances-of-1-model) **LiteLLM proxy can handle 1k+ requests/second during load tests**
 * Authentication & Spend Tracking [Virtual Keys](#managing-auth---virtual-keys)
-* Load balancing - Routing between [Multiple Models](#multiple-models---quick-start) + [Deployments of the same model](#multiple-instances-of-1-model)
 
 [**See LiteLLM Proxy code**](https://github.com/BerriAI/litellm/tree/main/litellm/proxy)
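For context, the "Deployments of the same model" bullet this patch moves refers to the proxy's `config.yaml` `model_list`, where one alias can point at several backends. A minimal sketch (the deployment names, endpoints, and key placeholders below are illustrative, not taken from this patch):

```yaml
model_list:
  # Two deployments sharing the alias "gpt-3.5-turbo";
  # the proxy load balances requests across them.
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/<your-azure-deployment>   # placeholder deployment name
      api_base: https://<your-endpoint>.openai.azure.com/
      api_key: <your-azure-api-key>
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo                   # OpenAI deployment
      api_key: <your-openai-api-key>
```

Clients then call the alias (`model: gpt-3.5-turbo`) and the proxy picks a deployment, which is the behavior the reworded bullet describes.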