diff --git a/docs/my-website/docs/index.md b/docs/my-website/docs/index.md
index 4c1cdd017..0712c3034 100644
--- a/docs/my-website/docs/index.md
+++ b/docs/my-website/docs/index.md
@@ -17,11 +17,11 @@ You can use litellm through either:
 1. [LiteLLM Proxy Server](#openai-proxy) - Server (LLM Gateway) to call 100+ LLMs, load balance, cost tracking across projects
 2. [LiteLLM python SDK](#basic-usage) - Python Client to call 100+ LLMs, load balance, cost tracking
 
-### When to use LiteLLM Proxy Server
+### **When to use LiteLLM Proxy Server (LLM Gateway)**
 
 :::tip
 
-Use LiteLLM Proxy Server if you want a **central service to access multiple LLMs**
+Use LiteLLM Proxy Server if you want a **central service (LLM Gateway) to access multiple LLMs**
 
 Typically used by Gen AI Enablement / ML PLatform Teams
 
@@ -31,7 +31,7 @@ Typically used by Gen AI Enablement / ML PLatform Teams
 - Track LLM Usage and setup guardrails
 - Customize Logging, Guardrails, Caching per project
 
-### When to use LiteLLM Python SDK
+### **When to use LiteLLM Python SDK**
 
 :::tip
 
@@ -44,6 +44,7 @@ Typically used by developers building llm projects
 
 - LiteLLM SDK gives you a unified interface to access multiple LLMs (100+ LLMs)
 - Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - [Router](https://docs.litellm.ai/docs/routing)
+## **LiteLLM Python SDK**
 
 ### Basic usage
 
@@ -383,7 +384,7 @@ response = completion(
 )
 ```
 
-## OpenAI Proxy
+## **LiteLLM Proxy Server (LLM Gateway)**
 
 Track spend across multiple projects/people
 
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index ecfe94b3c..a4e59a845 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -23,7 +23,7 @@ const sidebars = {
       label: "💥 LiteLLM Proxy Server",
       link: {
         type: "generated-index",
-        title: "💥 LiteLLM Proxy Server",
+        title: "💥 LiteLLM Proxy Server (LLM Gateway)",
         description: `OpenAI Proxy Server (LLM Gateway) to call 100+ LLMs in a unified interface & track spend, set budgets per virtual key/user`,
         slug: "/simple_proxy",
       },