forked from phoenix/litellm-mirror

commit d9c91838ce (parent eff874bf05)

    docs cleanup

2 changed files with 6 additions and 5 deletions
@@ -17,11 +17,11 @@ You can use litellm through either:
 1. [LiteLLM Proxy Server](#openai-proxy) - Server (LLM Gateway) to call 100+ LLMs, load balance, cost tracking across projects
 2. [LiteLLM python SDK](#basic-usage) - Python Client to call 100+ LLMs, load balance, cost tracking

-### When to use LiteLLM Proxy Server
+### **When to use LiteLLM Proxy Server (LLM Gateway)**

 :::tip

-Use LiteLLM Proxy Server if you want a **central service to access multiple LLMs**
+Use LiteLLM Proxy Server if you want a **central service (LLM Gateway) to access multiple LLMs**

 Typically used by Gen AI Enablement / ML PLatform Teams
@@ -31,7 +31,7 @@ Typically used by Gen AI Enablement / ML PLatform Teams
 - Track LLM Usage and setup guardrails
 - Customize Logging, Guardrails, Caching per project

-### When to use LiteLLM Python SDK
+### **When to use LiteLLM Python SDK**

 :::tip
@@ -44,6 +44,7 @@ Typically used by developers building llm projects
 - LiteLLM SDK gives you a unified interface to access multiple LLMs (100+ LLMs)
 - Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - [Router](https://docs.litellm.ai/docs/routing)

+## **LiteLLM Python SDK**

 ### Basic usage
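The Router bullet in the hunk above refers to retry/fallback logic across multiple deployments. As a rough pure-Python illustration of that idea only (not litellm's actual Router API; the function names and deployment strings here are hypothetical), a fallback loop across deployments might look like:

```python
# Illustrative sketch of fallback-across-deployments, NOT litellm's Router.
# litellm's real implementation is documented at docs.litellm.ai/docs/routing.
def call_with_fallbacks(prompt, deployments, call_fn):
    """Try each deployment in order; return the first successful response."""
    errors = []
    for dep in deployments:
        try:
            return call_fn(dep, prompt)
        except Exception as exc:  # a failed deployment falls through to the next
            errors.append((dep, exc))
    raise RuntimeError(f"all deployments failed: {errors}")

# Toy stand-in for a provider call: the Azure deployment times out,
# so the call falls back to the OpenAI deployment.
def fake_call(dep, prompt):
    if dep == "azure/gpt-4o":
        raise TimeoutError("azure deployment down")
    return f"{dep}: echo {prompt}"

print(call_with_fallbacks("hi", ["azure/gpt-4o", "openai/gpt-4o"], fake_call))
```

The real Router adds cooldowns, retries, and load balancing on top of this basic pattern.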
@@ -383,7 +384,7 @@ response = completion(
 )
 ```

-## OpenAI Proxy
+## **LiteLLM Proxy Server (LLM Gateway)**

 Track spend across multiple projects/people
@@ -23,7 +23,7 @@ const sidebars = {
       label: "💥 LiteLLM Proxy Server",
       link: {
         type: "generated-index",
-        title: "💥 LiteLLM Proxy Server",
+        title: "💥 LiteLLM Proxy Server (LLM Gateway)",
         description: `OpenAI Proxy Server (LLM Gateway) to call 100+ LLMs in a unified interface & track spend, set budgets per virtual key/user`,
         slug: "/simple_proxy",
       },
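For context, the `title` changed in this hunk sits inside a Docusaurus `generated-index` category link. A minimal sketch of what the surrounding sidebars.js entry might look like (the sidebar key and `items` list are assumptions for illustration, not taken from the repo):

```javascript
// Hypothetical sketch of a Docusaurus sidebar category with a
// generated-index link, mirroring the fields visible in the diff above.
const sidebars = {
  tutorialSidebar: [ // sidebar key is an assumption
    {
      type: "category",
      label: "💥 LiteLLM Proxy Server",
      link: {
        type: "generated-index",
        title: "💥 LiteLLM Proxy Server (LLM Gateway)",
        description: `OpenAI Proxy Server (LLM Gateway) to call 100+ LLMs in a unified interface & track spend, set budgets per virtual key/user`,
        slug: "/simple_proxy",
      },
      items: ["simple_proxy"], // hypothetical doc id placeholder
    },
  ],
};

module.exports = sidebars;
```

With a `generated-index` link, Docusaurus renders an auto-generated landing page at the given `slug`, using `title` and `description` as its header.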