diff --git a/README.md b/README.md
index 6c81181f3..3caeb830b 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

-OpenAI Proxy Server | Enterprise Tier
+OpenAI Proxy Server | Hosted Proxy (Preview) | Enterprise Tier

PyPI Version
@@ -128,7 +128,9 @@ response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content
# OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
-Set Budgets & Rate limits across multiple projects
+Track spend + Load Balance across multiple projects
+
+[Hosted Proxy (Preview)](https://docs.litellm.ai/docs/hosted)

The proxy provides:
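The second hunk's header shows litellm's `completion(...)` call truncated mid-line; the point of the library is that every listed provider accepts the same OpenAI-style chat request. A minimal, stdlib-only sketch of that request shape (the `build_chat_request` helper is hypothetical, for illustration — it is not part of litellm's API):

```python
# Hypothetical helper (not litellm API) showing the OpenAI chat-completion
# request shape that litellm standardizes across providers (Bedrock,
# Huggingface, VertexAI, TogetherAI, Azure, OpenAI, ...).
# Only the `model` string changes between providers; the payload shape does not.
def build_chat_request(model: str, user_prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

# Same shape as the diff's truncated call:
request = build_chat_request("gpt-3.5-turbo", "Hello, how are you?")
print(request["messages"][0]["role"])  # → user
```

Swapping `"gpt-3.5-turbo"` for another provider's model name is the only change a caller would make; that uniformity is what the README's "OpenAI format" claim refers to.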