diff --git a/docs/my-website/docs/proxy/db_info.md b/docs/my-website/docs/proxy/db_info.md
new file mode 100644
index 000000000..6e6a48bd1
--- /dev/null
+++ b/docs/my-website/docs/proxy/db_info.md
@@ -0,0 +1,67 @@
+# What is stored in the DB
+
+The LiteLLM Proxy uses a PostgreSQL database to store various information. Here are the main features the DB is used for:
+- Virtual Keys, Organizations, Teams, Users, Budgets, and more.
+- Per-request usage tracking
+
+## Link to DB Schema
+
+You can see the full DB schema [here](https://github.com/BerriAI/litellm/blob/main/schema.prisma).
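+
+The proxy reads its Postgres connection from the `DATABASE_URL` environment variable. A minimal sketch (the host, credentials, and database name below are placeholders, substitute your own):
+
+```shell
+# Placeholder connection string - replace user, password, host, and db name
+export DATABASE_URL="postgresql://llmproxy:dbpassword@localhost:5432/litellm"
+```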
+
+## DB Tables
+
+### Organizations, Teams, Users, End Users
+
+| Table Name | Description | Row Insert Frequency |
+|------------|-------------|---------------------|
+| LiteLLM_OrganizationTable | Manages organization-level configurations. Tracks organization spend, model access, and metadata. Links to budget configurations and teams. | Low |
+| LiteLLM_TeamTable | Handles team-level settings within organizations. Manages team members, admins, and their roles. Controls team-specific budgets, rate limits, and model access. | Low |
+| LiteLLM_UserTable | Stores user information and their settings. Tracks individual user spend, model access, and rate limits. Manages user roles and team memberships. | Low |
+| LiteLLM_EndUserTable | Manages end-user configurations. Controls model access and regional requirements. Tracks end-user spend. | Low |
+| LiteLLM_TeamMembership | Tracks user participation in teams. Manages team-specific user budgets and spend. | Low |
+| LiteLLM_OrganizationMembership | Manages user roles within organizations. Tracks organization-specific user permissions and spend. | Low |
+| LiteLLM_InvitationLink | Handles user invitations. Manages invitation status and expiration. Tracks who created and accepted invitations. | Low |
+| LiteLLM_UserNotifications | Handles model access requests. Tracks user requests for model access. Manages approval status. | Low |
+
+### Authentication
+
+| Table Name | Description | Row Insert Frequency |
+|------------|-------------|---------------------|
+| LiteLLM_VerificationToken | Manages Virtual Keys and their permissions. Controls token-specific budgets, rate limits, and model access. Tracks key-specific spend and metadata. | **Medium** - stores all Virtual Keys |
+
+### Model (LLM) Management
+
+| Table Name | Description | Row Insert Frequency |
+|------------|-------------|---------------------|
+| LiteLLM_ProxyModelTable | Stores model configurations. Defines available models and their parameters. Contains model-specific information and settings. | Low - Configuration only |
+
+### Budget Management
+
+| Table Name | Description | Row Insert Frequency |
+|------------|-------------|---------------------|
+| LiteLLM_BudgetTable | Stores budget and rate limit configurations for organizations, keys, and end users. Tracks max budgets, soft budgets, TPM/RPM limits, and model-specific budgets. Handles budget duration and reset timing. | Low - Configuration only |
+
+
+### Tracking & Logging
+
+| Table Name | Description | Row Insert Frequency |
+|------------|-------------|---------------------|
+| LiteLLM_SpendLogs | Detailed logs of all API requests. Records token usage, spend, and timing information. Tracks which models and keys were used. | **High - every LLM API request** |
+| LiteLLM_ErrorLogs | Captures failed requests and errors. Stores exception details and request information. Helps with debugging and monitoring. | **Medium - on errors only** |
+| LiteLLM_AuditLog | Tracks changes to system configuration. Records who made changes and what was modified. Maintains history of updates to teams, users, and models. | **Off by default**, **High - when enabled** |
+
+## How to Disable `LiteLLM_SpendLogs`
+
+You can disable spend logs by setting `disable_spend_logs` to `True` in the `general_settings` section of your proxy_config.yaml file.
+
+```yaml
+general_settings:
+ disable_spend_logs: True
+```
+
+
+### What is the impact of disabling `LiteLLM_SpendLogs`?
+
+- You **will not** be able to view Usage on the LiteLLM UI.
+- You **will** continue to see cost metrics in s3, Prometheus, Langfuse, or any other logging integration you are using.
+
diff --git a/docs/my-website/docs/proxy/logging.md b/docs/my-website/docs/proxy/logging.md
index 1bd1b6c4b..bdd29205d 100644
--- a/docs/my-website/docs/proxy/logging.md
+++ b/docs/my-website/docs/proxy/logging.md
@@ -61,7 +61,7 @@ litellm_settings:
Removes any field with `user_api_key_*` from metadata.
-## What gets logged? StandardLoggingPayload
+## What gets logged?
Found under `kwargs["standard_logging_object"]`. This is a standard payload, logged for every response.
@@ -148,6 +148,11 @@ class StandardLoggingModelCostFailureDebugInformation(TypedDict, total=False):
custom_pricing: Optional[bool]
```
+## Conditional Logging for Virtual Keys / Teams
+
+[👉 Tutorial - Allow each team to use their own Langfuse Project / custom callbacks](team_logging)
+
+
## Langfuse
We will use the `--config` flag to set `litellm.success_callback = ["langfuse"]`. This will log all successful LLM calls to Langfuse. Make sure to set `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` in your environment.
@@ -300,40 +305,7 @@ print(response)
-### Team based Logging to Langfuse
-
-[👉 Tutorial - Allow each team to use their own Langfuse Project / custom callbacks](team_logging)
-
-
-### Redacting Messages, Response Content from Langfuse Logging
+### Redact Messages, Response Content
Set `litellm.turn_off_message_logging=True`. This will prevent the messages and responses from being logged to Langfuse, but request metadata will still be logged.
@@ -366,7 +338,7 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
```
-### LiteLLM-specific Tags on Langfuse - `cache_hit`, `cache_key`
+### LiteLLM Tags - `cache_hit`, `cache_key`
Use this if you want to control which LiteLLM-specific fields are logged as tags by the LiteLLM proxy. By default, LiteLLM Proxy logs no LiteLLM-specific fields.
@@ -401,7 +373,7 @@ litellm_settings:
langfuse_default_tags: ["cache_hit", "cache_key", "proxy_base_url", "user_api_key_alias", "user_api_key_user_id", "user_api_key_user_email", "user_api_key_team_alias", "semantic-similarity", "proxy_base_url"]
```
-### 🔧 Debugging - Viewing RAW CURL sent from LiteLLM to provider
+### View POST sent from LiteLLM to provider
Use this when you want to view the RAW curl request sent from LiteLLM to the LLM API
@@ -504,7 +476,7 @@ You will see `raw_request` in your Langfuse Metadata. This is the RAW CURL comma
-## OpenTelemetry format
+## OpenTelemetry
:::info
@@ -786,7 +758,7 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
** 🎉 Expect to see this trace logged in your OTEL collector**
-### Redacting Messages, Response Content from OTEL Logging
+### Redacting Messages, Response Content
Set `message_logging=False` for `otel`; no messages / responses will be logged.
@@ -800,7 +772,8 @@ callback_settings:
message_logging: False
```
-### Context propagation across Services `Traceparent HTTP Header`
+### Traceparent Header
+##### Context propagation across Services `Traceparent HTTP Header`
❓ Use this when you want to **pass information about the incoming request in a distributed tracing system**
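+
+For example, an incoming request can attach a W3C `traceparent` header so the proxy joins the caller's existing trace (the trace and span IDs below are placeholders):
+
+```shell
+# Hypothetical request - the traceparent value follows the W3C format:
+# version-traceid-spanid-flags
+curl --location 'http://0.0.0.0:4000/chat/completions' \
+    --header 'Authorization: Bearer sk-1234' \
+    --header 'Content-Type: application/json' \
+    --header 'traceparent: 00-80e1afed08e019fc1110464cfa66635c-7a085853722dc6d2-01' \
+    --data '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
+```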
@@ -850,7 +823,7 @@ Search for Trace=`80e1afed08e019fc1110464cfa66635c` on your OTEL Collector
-### Forwarding `Traceparent HTTP Header` to LLM APIs
+##### Forwarding `Traceparent HTTP Header` to LLM APIs
Use this if you want to forward the traceparent headers to your self-hosted LLMs like vLLM.
@@ -1095,7 +1068,7 @@ class MyCustomHandler(CustomLogger):
{'mode': 'embedding', 'input_cost_per_token': 0.002}
```
-### Logging responses from proxy
+##### Logging responses from proxy
Both `/chat/completions` and `/embeddings` responses are available as `response_obj`
@@ -1413,7 +1386,7 @@ export GALILEO_USERNAME=""
export GALILEO_PASSWORD=""
```
-### Quick Start
+#### Quick Start
1. Add to Config.yaml
@@ -1454,7 +1427,7 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
🎉 That's it - Expect to see your Logs on your Galileo Dashboard
-## Logging Proxy Cost + Usage - OpenMeter
+## OpenMeter
Bill customers according to their LLM API usage with [OpenMeter](../observability/openmeter.md)
@@ -1466,7 +1439,7 @@ export OPENMETER_API_ENDPOINT="" # defaults to https://openmeter.cloud
export OPENMETER_API_KEY=""
```
-### Quick Start
+##### Quick Start
1. Add to Config.yaml
@@ -1799,7 +1772,10 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
}'
```
-## (BETA) Moderation with Azure Content Safety
+
+
\ No newline at end of file
diff --git a/docs/my-website/docs/proxy/ui.md b/docs/my-website/docs/proxy/ui.md
index e18a9e2e5..5e6e9f52f 100644
--- a/docs/my-website/docs/proxy/ui.md
+++ b/docs/my-website/docs/proxy/ui.md
@@ -64,7 +64,7 @@ Allow others to create/delete their own keys.
Features here are behind a commercial license in our `/enterprise` folder. [**See Code**](https://github.com/BerriAI/litellm/tree/main/enterprise)
-### Setup SSO/Auth for UI
+### SSO for UI
#### Step 1: Set upperbounds for keys
Control the upperbound that users can use for `max_budget`, `budget_duration` or any `key/generate` param per key.
@@ -88,12 +88,6 @@ litellm_settings:
#### Step 2: Setup Oauth Client
-:::tip
-
-Looking for how to use Oauth 2.0 for /chat, /completions API requests to the proxy? [Follow this doc](oauth2)
-
-:::
-
@@ -196,6 +190,13 @@ GENERIC_SCOPE = "openid profile email" # default scope openid is sometimes not e
+### Default Login, Logout URLs
+
+Some SSO providers require a specific redirect URL for login and logout. You can use the following values:
+
+- Login: `/sso/key/generate`
+- Logout: ``
+
#### Step 3. Set `PROXY_BASE_URL` in your .env
Set this in your .env (so the proxy can set the correct redirect URL)
@@ -216,9 +217,9 @@ export ALLOWED_EMAIL_DOMAINS="berri.ai"
This will check if the user email we receive from SSO contains this domain, before allowing access.
-### Set Admin view w/ SSO
+### Set Proxy Admin
-You just need to set Proxy Admin ID
+Set a Proxy Admin when SSO is enabled. Once SSO is enabled, the `user_id` for users is retrieved from the SSO provider. To set a Proxy Admin, copy the `user_id` from the UI and set it in your `.env` as `PROXY_ADMIN_ID`.
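+
+A minimal `.env` sketch (the value below is a placeholder, use the `user_id` copied from the UI in the steps below):
+
+```shell
+# Placeholder - replace with the user_id copied from the LiteLLM UI
+export PROXY_ADMIN_ID="your-sso-user-id"
+```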
#### Step 1: Copy your ID from the UI
@@ -256,7 +257,7 @@ general_settings:
default_team_disabled: true # OR you can set env var PROXY_DEFAULT_TEAM_DISABLED="true"
```
-### Sign in with Username, Password when SSO is on
+### Use Username, Password when SSO is on
If you need to access the UI via username/password when SSO is on, navigate to `/fallback/login`. This route will allow you to sign in with your username/password credentials.
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index dd8443a28..6da9978ba 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -36,7 +36,7 @@ const sidebars = {
{
type: "category",
label: "Architecture",
- items: ["proxy/architecture"],
+ items: ["proxy/architecture", "proxy/db_info"],
},
{
type: "link",