diff --git a/docs/my-website/docs/enterprise.md b/docs/my-website/docs/enterprise.md
index 0d57b4c25..0edf937ed 100644
--- a/docs/my-website/docs/enterprise.md
+++ b/docs/my-website/docs/enterprise.md
@@ -10,6 +10,7 @@ For companies that need SSO, user management and professional support for LiteLL
This covers:
- ✅ **Features under the [LiteLLM Commercial License (Content Mod, Custom Tags, etc.)](https://docs.litellm.ai/docs/proxy/enterprise)**
- ✅ [**Secure UI access with Single Sign-On**](../docs/proxy/ui.md#setup-ssoauth-for-ui)
+- ✅ [**Audit Logs with retention policy**](../docs/proxy/enterprise.md#audit-logs)
- ✅ [**JWT-Auth**](../docs/proxy/token_auth.md)
- ✅ [**Prompt Injection Detection**](#prompt-injection-detection-lakeraai)
- ✅ [**Invite Team Members to access `/spend` Routes**](../docs/proxy/cost_tracking#allowing-non-proxy-admins-to-access-spend-endpoints)
diff --git a/docs/my-website/docs/proxy/enterprise.md b/docs/my-website/docs/proxy/enterprise.md
index 8e2b79a5f..e52a19162 100644
--- a/docs/my-website/docs/proxy/enterprise.md
+++ b/docs/my-website/docs/proxy/enterprise.md
@@ -2,30 +2,213 @@ import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-# ✨ Enterprise Features - Content Mod, SSO, Custom Swagger
+# ✨ Enterprise Features - SSO, Audit Logs, Guardrails
-Features here are behind a commercial license in our `/enterprise` folder. [**See Code**](https://github.com/BerriAI/litellm/tree/main/enterprise)
+:::tip
-:::info
-
-[Get Started with Enterprise here](https://github.com/BerriAI/litellm/tree/main/enterprise)
+Get in touch with us [here](https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat)
:::
Features:
- ✅ [SSO for Admin UI](./ui.md#✨-enterprise-features)
-- ✅ Content Moderation with LLM Guard, LlamaGuard, Google Text Moderations
-- ✅ [Prompt Injection Detection (with LakeraAI API)](#prompt-injection-detection-lakeraai)
+- ✅ [Audit Logs](#audit-logs)
+- ✅ [Tracking Spend for Custom Tags](#tracking-spend-for-custom-tags)
+- ✅ [Content Moderation with LLM Guard, LlamaGuard, Google Text Moderations](#content-moderation)
+- ✅ [Prompt Injection Detection (with LakeraAI API)](#prompt-injection-detection---lakeraai)
+- ✅ [Custom Branding + Routes on Swagger Docs](#swagger-docs---custom-routes--branding)
- ✅ Reject calls from Blocked User list
- ✅ Reject calls (incoming / outgoing) with Banned Keywords (e.g. competitors)
-- ✅ Don't log/store specific requests to Langfuse, Sentry, etc. (eg confidential LLM requests)
-- ✅ Tracking Spend for Custom Tags
-- ✅ Custom Branding + Routes on Swagger Docs
-- ✅ Audit Logs for `Created At, Created By` when Models Added
+
+## Audit Logs
+
+Store audit logs for **Create, Update, Delete operations** done on `Teams` and `Virtual Keys`
+
+**Step 1** Switch on audit logs
+```yaml
+litellm_settings:
+ store_audit_logs: true
+```
+
+Start the LiteLLM proxy with this config.
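Under the hood, an audit entry is only written when both the enterprise license check and this config flag pass (see `create_audit_log_for_update` in `proxy_server.py` later in this diff). A minimal sketch of that gate — the function name here is illustrative, not part of LiteLLM:

```python
def should_store_audit_log(premium_user: bool, store_audit_logs: bool) -> bool:
    # Mirrors the guard in create_audit_log_for_update:
    # both an enterprise license AND litellm_settings.store_audit_logs
    # must be enabled before anything is written to LiteLLM_AuditLog.
    return premium_user is True and store_audit_logs is True

print(should_store_audit_log(True, True))   # True
print(should_store_audit_log(True, False))  # False
```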
+
+**Step 2** Test it - Create a Team
+
+```shell
+curl --location 'http://0.0.0.0:4000/team/new' \
+ --header 'Authorization: Bearer sk-1234' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "max_budget": 2
+ }'
+```
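The same request can be sketched in Python with just the standard library (endpoint, key, and body taken from the curl above; this is not an official client):

```python
import json
import urllib.request

# Build the same POST /team/new request as the curl example above
payload = json.dumps({"max_budget": 2}).encode("utf-8")
req = urllib.request.Request(
    "http://0.0.0.0:4000/team/new",
    data=payload,
    headers={
        "Authorization": "Bearer sk-1234",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.get_method())  # POST
# To actually send it (requires a running proxy):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```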
+
+**Step 3** Expected Log
+
+```json
+{
+ "id": "e1760e10-4264-4499-82cd-c08c86c8d05b",
+ "updated_at": "2024-06-06T02:10:40.836420+00:00",
+ "changed_by": "109010464461339474872",
+ "action": "created",
+ "table_name": "LiteLLM_TeamTable",
+ "object_id": "82e725b5-053f-459d-9a52-867191635446",
+ "before_value": null,
+ "updated_values": {
+ "team_id": "82e725b5-053f-459d-9a52-867191635446",
+ "admins": [],
+ "members": [],
+ "members_with_roles": [
+ {
+ "role": "admin",
+ "user_id": "109010464461339474872"
+ }
+ ],
+ "max_budget": 2.0,
+ "models": [],
+ "blocked": false
+ }
+}
+```
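Since each audit entry is plain JSON, it is easy to post-process. A small sketch that loads the example entry above (trimmed to a few of its fields) and prints a one-line summary:

```python
import json

# Example audit log entry, trimmed from the expected log shown above
audit_entry = json.loads("""
{
  "id": "e1760e10-4264-4499-82cd-c08c86c8d05b",
  "updated_at": "2024-06-06T02:10:40.836420+00:00",
  "changed_by": "109010464461339474872",
  "action": "created",
  "table_name": "LiteLLM_TeamTable",
  "object_id": "82e725b5-053f-459d-9a52-867191635446",
  "before_value": null,
  "updated_values": {"team_id": "82e725b5-053f-459d-9a52-867191635446", "max_budget": 2.0}
}
""")

# One-line, human-readable summary: who did what, to which object, and when
summary = (
    f"{audit_entry['changed_by']} {audit_entry['action']} "
    f"{audit_entry['table_name']} {audit_entry['object_id']} "
    f"at {audit_entry['updated_at']}"
)
print(summary)
```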
+
+
+## Tracking Spend for Custom Tags
+
+Requirements:
+
+- Virtual Keys & a database should be set up; see [virtual keys](https://docs.litellm.ai/docs/proxy/virtual_keys)
+
+#### Usage - /chat/completions requests with request tags
+
+
+
+
+
+
+
+Set `extra_body={"metadata": { }}` to the `metadata` you want to pass
+
+```python
+import openai
+client = openai.OpenAI(
+ api_key="anything",
+ base_url="http://0.0.0.0:4000"
+)
+
+# request sent to model set on litellm proxy, `litellm --model`
+response = client.chat.completions.create(
+ model="gpt-3.5-turbo",
+ messages = [
+ {
+ "role": "user",
+ "content": "this is a test request, write a short poem"
+ }
+ ],
+ extra_body={
+ "metadata": {
+ "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
+ }
+ }
+)
+
+print(response)
+```
+
+
+
+
+Pass `metadata` as part of the request body
+
+```shell
+curl --location 'http://0.0.0.0:4000/chat/completions' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "user",
+ "content": "what llm are you"
+ }
+ ],
+ "metadata": {"tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]}
+}'
+```
+
+
+
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts.chat import (
+ ChatPromptTemplate,
+ HumanMessagePromptTemplate,
+ SystemMessagePromptTemplate,
+)
+from langchain.schema import HumanMessage, SystemMessage
+
+chat = ChatOpenAI(
+ openai_api_base="http://0.0.0.0:4000",
+ model = "gpt-3.5-turbo",
+ temperature=0.1,
+ extra_body={
+ "metadata": {
+ "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
+ }
+ }
+)
+
+messages = [
+ SystemMessage(
+ content="You are a helpful assistant that im using to make a test request to."
+ ),
+ HumanMessage(
+ content="test from litellm. tell me why it's amazing in 1 sentence"
+ ),
+]
+response = chat(messages)
+
+print(response)
+```
+
+
+
+
+
+#### Viewing Spend per tag
+
+#### `/spend/tags` Request Format
+```shell
+curl -X GET "http://0.0.0.0:4000/spend/tags" \
+-H "Authorization: Bearer sk-1234"
+```
+
+#### `/spend/tags` Response Format
+```json
+[
+ {
+ "individual_request_tag": "model-anthropic-claude-v2.1",
+ "log_count": 6,
+ "total_spend": 0.000672
+ },
+ {
+ "individual_request_tag": "app-ishaan-local",
+ "log_count": 4,
+ "total_spend": 0.000448
+ },
+ {
+ "individual_request_tag": "app-ishaan-prod",
+ "log_count": 2,
+ "total_spend": 0.000224
+ }
+]
+
+```
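Because the response is a flat list, per-tag totals roll up in a few lines of Python. A sketch using the sample data above:

```python
# Aggregate the sample /spend/tags response shown above
tags = [
    {"individual_request_tag": "model-anthropic-claude-v2.1", "log_count": 6, "total_spend": 0.000672},
    {"individual_request_tag": "app-ishaan-local", "log_count": 4, "total_spend": 0.000448},
    {"individual_request_tag": "app-ishaan-prod", "log_count": 2, "total_spend": 0.000224},
]

total_spend = sum(t["total_spend"] for t in tags)
total_requests = sum(t["log_count"] for t in tags)
print(f"{total_requests} requests, ${total_spend:.6f} total spend")

# Highest-spend tag first
top = max(tags, key=lambda t: t["total_spend"])
print(top["individual_request_tag"])
```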
+
+
+
## Content Moderation
-### Content Moderation with LLM Guard
+#### Content Moderation with LLM Guard
Set the LLM Guard API Base in your environment
@@ -160,7 +343,7 @@ curl --location 'http://0.0.0.0:4000/v1/chat/completions' \
-### Content Moderation with LlamaGuard
+#### Content Moderation with LlamaGuard
Currently works with Sagemaker's LlamaGuard endpoint.
@@ -194,7 +377,7 @@ callbacks: ["llamaguard_moderations"]
-### Content Moderation with Google Text Moderation
+#### Content Moderation with Google Text Moderation
Requires your GOOGLE_APPLICATION_CREDENTIALS to be set in your .env (same as VertexAI).
@@ -250,7 +433,7 @@ Here are the category specific values:
-### Content Moderation with OpenAI Moderations
+#### Content Moderation with OpenAI Moderations
Use this if you want to reject /chat, /completions, /embeddings calls that fail OpenAI Moderations checks
@@ -302,6 +485,42 @@ curl --location 'http://localhost:4000/chat/completions' \
}'
```
+## Swagger Docs - Custom Routes + Branding
+
+:::info
+
+Requires a LiteLLM Enterprise key to use. Get a free 2-week license [here](https://forms.gle/sTDVprBs18M4V8Le8)
+
+:::
+
+Set LiteLLM Key in your environment
+
+```bash
+LITELLM_LICENSE=""
+```
+
+#### Customize Title + Description
+
+In your environment, set:
+
+```bash
+DOCS_TITLE="TotalGPT"
+DOCS_DESCRIPTION="Sample Company Description"
+```
+
+#### Customize Routes
+
+Hide admin routes from users.
+
+In your environment, set:
+
+```bash
+DOCS_FILTERED="True" # only shows openai routes to user
+```
+
+
+
+
## Enable Blocked User Lists
If any call is made to proxy with this user id, it'll be rejected - use this if you want to let users opt-out of ai features
@@ -417,176 +636,6 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
}
'
```
-## Tracking Spend for Custom Tags
-
-Requirements:
-
-- Virtual Keys & a database should be set up, see [virtual keys](https://docs.litellm.ai/docs/proxy/virtual_keys)
-
-### Usage - /chat/completions requests with request tags
-
-
-
-
-
-
-
-Set `extra_body={"metadata": { }}` to `metadata` you want to pass
-
-```python
-import openai
-client = openai.OpenAI(
- api_key="anything",
- base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(
- model="gpt-3.5-turbo",
- messages = [
- {
- "role": "user",
- "content": "this is a test request, write a short poem"
- }
- ],
- extra_body={
- "metadata": {
- "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
- }
- }
-)
-
-print(response)
-```
-
-
-
-
-Pass `metadata` as part of the request body
-
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Content-Type: application/json' \
- --data '{
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- "metadata": {"tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]}
-}'
-```
-
-
-
-```python
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- SystemMessagePromptTemplate,
-)
-from langchain.schema import HumanMessage, SystemMessage
-
-chat = ChatOpenAI(
- openai_api_base="http://0.0.0.0:4000",
- model = "gpt-3.5-turbo",
- temperature=0.1,
- extra_body={
- "metadata": {
- "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
- }
- }
-)
-
-messages = [
- SystemMessage(
- content="You are a helpful assistant that im using to make a test request to."
- ),
- HumanMessage(
- content="test from litellm. tell me why it's amazing in 1 sentence"
- ),
-]
-response = chat(messages)
-
-print(response)
-```
-
-
-
-
-
-### Viewing Spend per tag
-
-#### `/spend/tags` Request Format
-```shell
-curl -X GET "http://0.0.0.0:4000/spend/tags" \
--H "Authorization: Bearer sk-1234"
-```
-
-#### `/spend/tags`Response Format
-```shell
-[
- {
- "individual_request_tag": "model-anthropic-claude-v2.1",
- "log_count": 6,
- "total_spend": 0.000672
- },
- {
- "individual_request_tag": "app-ishaan-local",
- "log_count": 4,
- "total_spend": 0.000448
- },
- {
- "individual_request_tag": "app-ishaan-prod",
- "log_count": 2,
- "total_spend": 0.000224
- }
-]
-
-```
-
-
-
-
-## Swagger Docs - Custom Routes + Branding
-
-:::info
-
-Requires a LiteLLM Enterprise key to use. Get a free 2-week license [here](https://forms.gle/sTDVprBs18M4V8Le8)
-
-:::
-
-Set LiteLLM Key in your environment
-
-```bash
-LITELLM_LICENSE=""
-```
-
-### Customize Title + Description
-
-In your environment, set:
-
-```bash
-DOCS_TITLE="TotalGPT"
-DOCS_DESCRIPTION="Sample Company Description"
-```
-
-### Customize Routes
-
-Hide admin routes from users.
-
-In your environment, set:
-
-```bash
-DOCS_FILTERED="True" # only shows openai routes to user
-```
-
-
## Public Model Hub
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index f5bdc945b..88ac5e6cd 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -36,6 +36,7 @@ const sidebars = {
label: "📖 All Endpoints (Swagger)",
href: "https://litellm-api.up.railway.app/",
},
+ "proxy/enterprise",
"proxy/demo",
"proxy/configs",
"proxy/reliability",
@@ -45,7 +46,6 @@ const sidebars = {
"proxy/customers",
"proxy/billing",
"proxy/user_keys",
- "proxy/enterprise",
"proxy/virtual_keys",
"proxy/alerting",
{
diff --git a/litellm/__init__.py b/litellm/__init__.py
index f67a252eb..9fb614396 100644
--- a/litellm/__init__.py
+++ b/litellm/__init__.py
@@ -60,6 +60,7 @@ _async_failure_callback: List[Callable] = (
pre_call_rules: List[Callable] = []
post_call_rules: List[Callable] = []
turn_off_message_logging: Optional[bool] = False
+store_audit_logs = False  # Enterprise feature, store audit logs of create/update/delete operations
## end of callbacks #############
email: Optional[str] = (
diff --git a/litellm/proxy/_types.py b/litellm/proxy/_types.py
index f54bee399..8a95f4e1d 100644
--- a/litellm/proxy/_types.py
+++ b/litellm/proxy/_types.py
@@ -76,6 +76,17 @@ class LitellmUserRoles(str, enum.Enum):
return ui_labels.get(self.value, "")
+class LitellmTableNames(str, enum.Enum):
+ """
+ Enum for Table Names used by LiteLLM
+ """
+
+ TEAM_TABLE_NAME: str = "LiteLLM_TeamTable"
+ USER_TABLE_NAME: str = "LiteLLM_UserTable"
+ KEY_TABLE_NAME: str = "LiteLLM_VerificationToken"
+ PROXY_MODEL_TABLE_NAME: str = "LiteLLM_ModelTable"
+
+
AlertType = Literal[
"llm_exceptions",
"llm_too_slow",
@@ -1276,6 +1287,22 @@ class LiteLLM_ErrorLogs(LiteLLMBase):
endTime: Union[str, datetime, None]
+class LiteLLM_AuditLogs(LiteLLMBase):
+ id: str
+ updated_at: datetime
+ changed_by: str
+ action: Literal["created", "updated", "deleted"]
+ table_name: Literal[
+ LitellmTableNames.TEAM_TABLE_NAME,
+ LitellmTableNames.USER_TABLE_NAME,
+ LitellmTableNames.KEY_TABLE_NAME,
+ LitellmTableNames.PROXY_MODEL_TABLE_NAME,
+ ]
+ object_id: str
+ before_value: Optional[Json] = None
+ updated_values: Optional[Json] = None
+
+
class LiteLLM_SpendLogs_ResponseObject(LiteLLMBase):
response: Optional[List[Union[LiteLLM_SpendLogs, Any]]] = None
diff --git a/litellm/proxy/proxy_config.yaml b/litellm/proxy/proxy_config.yaml
index e3d4effe8..88fc0e913 100644
--- a/litellm/proxy/proxy_config.yaml
+++ b/litellm/proxy/proxy_config.yaml
@@ -23,4 +23,5 @@ general_settings:
master_key: sk-1234
litellm_settings:
- callbacks: ["otel"]
\ No newline at end of file
+ callbacks: ["otel"]
+ store_audit_logs: true
\ No newline at end of file
diff --git a/litellm/proxy/proxy_server.py b/litellm/proxy/proxy_server.py
index 25035a016..8cf2fa118 100644
--- a/litellm/proxy/proxy_server.py
+++ b/litellm/proxy/proxy_server.py
@@ -7115,6 +7115,25 @@ async def generate_key_fn(
)
)
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ if litellm.store_audit_logs is True:
+ _updated_values = json.dumps(response)
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id
+ or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.KEY_TABLE_NAME,
+ object_id=response.get("token_id", ""),
+ action="created",
+ updated_values=_updated_values,
+ before_value=None,
+ )
+ )
+ )
+
return GenerateKeyResponse(**response)
except Exception as e:
traceback.print_exc()
@@ -7138,7 +7157,11 @@ async def generate_key_fn(
@router.post(
"/key/update", tags=["key management"], dependencies=[Depends(user_api_key_auth)]
)
-async def update_key_fn(request: Request, data: UpdateKeyRequest):
+async def update_key_fn(
+ request: Request,
+ data: UpdateKeyRequest,
+ user_api_key_dict: UserAPIKeyAuth = Depends(user_api_key_auth),
+):
"""
Update an existing key
"""
@@ -7150,6 +7173,16 @@ async def update_key_fn(request: Request, data: UpdateKeyRequest):
if prisma_client is None:
raise Exception("Not connected to DB!")
+ existing_key_row = await prisma_client.get_data(
+ token=data.key, table_name="key", query_type="find_unique"
+ )
+
+ if existing_key_row is None:
+ raise HTTPException(
+ status_code=404,
+            detail={"error": f"Key not found, passed key={data.key}"},
+ )
+
# get non default values for key
non_default_values = {}
for k, v in data_json.items():
@@ -7176,6 +7209,29 @@ async def update_key_fn(request: Request, data: UpdateKeyRequest):
hashed_token = hash_token(key)
user_api_key_cache.delete_cache(hashed_token)
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ if litellm.store_audit_logs is True:
+ _updated_values = json.dumps(data_json)
+
+ _before_value = existing_key_row.json(exclude_none=True)
+ _before_value = json.dumps(_before_value)
+
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id
+ or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.KEY_TABLE_NAME,
+ object_id=data.key,
+ action="updated",
+ updated_values=_updated_values,
+ before_value=_before_value,
+ )
+ )
+ )
+
return {"key": key, **response["data"]}
# update based on remaining passed in values
except Exception as e:
@@ -7238,6 +7294,34 @@ async def delete_key_fn(
):
user_id = None # unless they're admin
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ # we do this after the first for loop, since first for loop is for validation. we only want this inserted after validation passes
+ if litellm.store_audit_logs is True:
+        # make an audit log for each key deleted
+ for key in data.keys:
+ key_row = await prisma_client.get_data( # type: ignore
+ token=key, table_name="key", query_type="find_unique"
+ )
+
+ key_row = key_row.json(exclude_none=True)
+ _key_row = json.dumps(key_row)
+
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id
+ or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.KEY_TABLE_NAME,
+ object_id=key,
+ action="deleted",
+ updated_values="{}",
+ before_value=_key_row,
+ )
+ )
+ )
+
number_deleted_keys = await delete_verification_token(
tokens=keys, user_id=user_id
)
@@ -10365,12 +10449,65 @@ async def new_team(
}
},
)
+
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ if litellm.store_audit_logs is True:
+ _updated_values = complete_team_data.json(exclude_none=True)
+ _updated_values = json.dumps(_updated_values)
+
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.TEAM_TABLE_NAME,
+ object_id=data.team_id,
+ action="created",
+ updated_values=_updated_values,
+ before_value=None,
+ )
+ )
+ )
+
try:
return team_row.model_dump()
except Exception as e:
return team_row.dict()
+async def create_audit_log_for_update(request_data: LiteLLM_AuditLogs):
+ if premium_user is not True:
+ return
+
+ if litellm.store_audit_logs is not True:
+ return
+ if prisma_client is None:
+ raise Exception("prisma_client is None, no DB connected")
+
+ verbose_proxy_logger.debug("creating audit log for %s", request_data)
+
+ if isinstance(request_data.updated_values, dict):
+ request_data.updated_values = json.dumps(request_data.updated_values)
+
+ if isinstance(request_data.before_value, dict):
+ request_data.before_value = json.dumps(request_data.before_value)
+
+ _request_data = request_data.dict(exclude_none=True)
+
+ try:
+ await prisma_client.db.litellm_auditlog.create(
+ data={
+ **_request_data, # type: ignore
+ }
+ )
+ except Exception as e:
+ # [Non-Blocking Exception. Do not allow blocking LLM API call]
+ verbose_proxy_logger.error(f"Failed Creating audit log {e}")
+
+ return
+
+
@router.post(
"/team/update", tags=["team management"], dependencies=[Depends(user_api_key_auth)]
)
@@ -10443,6 +10580,27 @@ async def update_team(
team_id=data.team_id,
)
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ if litellm.store_audit_logs is True:
+ _before_value = existing_team_row.json(exclude_none=True)
+ _before_value = json.dumps(_before_value)
+ _after_value: str = json.dumps(updated_kv)
+
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.TEAM_TABLE_NAME,
+ object_id=data.team_id,
+ action="updated",
+ updated_values=_after_value,
+ before_value=_before_value,
+ )
+ )
+ )
+
return team_row
@@ -10714,6 +10872,35 @@ async def delete_team(
detail={"error": f"Team not found, passed team_id={team_id}"},
)
+ # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+ # we do this after the first for loop, since first for loop is for validation. we only want this inserted after validation passes
+ if litellm.store_audit_logs is True:
+ # make an audit log for each team deleted
+ for team_id in data.team_ids:
+ team_row = await prisma_client.get_data( # type: ignore
+ team_id=team_id, table_name="team", query_type="find_unique"
+ )
+
+ _team_row = team_row.json(exclude_none=True)
+
+ asyncio.create_task(
+ create_audit_log_for_update(
+ request_data=LiteLLM_AuditLogs(
+ id=str(uuid.uuid4()),
+ updated_at=datetime.now(timezone.utc),
+ changed_by=user_api_key_dict.user_id
+ or litellm_proxy_admin_name,
+ table_name=LitellmTableNames.TEAM_TABLE_NAME,
+ object_id=team_id,
+ action="deleted",
+ updated_values="{}",
+ before_value=_team_row,
+ )
+ )
+ )
+
+ # End of Audit logging
+
## DELETE ASSOCIATED KEYS
await prisma_client.delete_data(team_id_list=data.team_ids, table_name="key")
## DELETE TEAMS
diff --git a/litellm/proxy/schema.prisma b/litellm/proxy/schema.prisma
index 243f06337..7cc688ee8 100644
--- a/litellm/proxy/schema.prisma
+++ b/litellm/proxy/schema.prisma
@@ -243,4 +243,16 @@ model LiteLLM_InvitationLink {
liteLLM_user_table_user LiteLLM_UserTable @relation("UserId", fields: [user_id], references: [user_id])
liteLLM_user_table_created LiteLLM_UserTable @relation("CreatedBy", fields: [created_by], references: [user_id])
liteLLM_user_table_updated LiteLLM_UserTable @relation("UpdatedBy", fields: [updated_by], references: [user_id])
+}
+
+
+model LiteLLM_AuditLog {
+ id String @id @default(uuid())
+ updated_at DateTime @default(now())
+ changed_by String // user or system that performed the action
+ action String // create, update, delete
+  table_name    String  // one of LitellmTableNames.TEAM_TABLE_NAME, LitellmTableNames.USER_TABLE_NAME, LitellmTableNames.KEY_TABLE_NAME, LitellmTableNames.PROXY_MODEL_TABLE_NAME
+ object_id String // id of the object being audited. This can be the key id, team id, user id, model id
+  before_value  Json?   // value of the row before the change
+ updated_values Json? // value of the row after change
}
\ No newline at end of file
diff --git a/schema.prisma b/schema.prisma
index 243f06337..7cc688ee8 100644
--- a/schema.prisma
+++ b/schema.prisma
@@ -243,4 +243,16 @@ model LiteLLM_InvitationLink {
liteLLM_user_table_user LiteLLM_UserTable @relation("UserId", fields: [user_id], references: [user_id])
liteLLM_user_table_created LiteLLM_UserTable @relation("CreatedBy", fields: [created_by], references: [user_id])
liteLLM_user_table_updated LiteLLM_UserTable @relation("UpdatedBy", fields: [updated_by], references: [user_id])
+}
+
+
+model LiteLLM_AuditLog {
+ id String @id @default(uuid())
+ updated_at DateTime @default(now())
+ changed_by String // user or system that performed the action
+ action String // create, update, delete
+  table_name    String  // one of LitellmTableNames.TEAM_TABLE_NAME, LitellmTableNames.USER_TABLE_NAME, LitellmTableNames.KEY_TABLE_NAME, LitellmTableNames.PROXY_MODEL_TABLE_NAME
+ object_id String // id of the object being audited. This can be the key id, team id, user id, model id
+  before_value  Json?   // value of the row before the change
+ updated_values Json? // value of the row after change
}
\ No newline at end of file