forked from phoenix/litellm-mirror
Merge pull request #4030 from BerriAI/litellm_audit_logs_premium
[Feat] Audit Logs for Key, User, ProxyModel CRUD operations
commit 8e8509d848
9 changed files with 478 additions and 188 deletions

@@ -10,6 +10,7 @@ For companies that need SSO, user management and professional support for LiteLL
 This covers:
 - ✅ **Features under the [LiteLLM Commercial License (Content Mod, Custom Tags, etc.)](https://docs.litellm.ai/docs/proxy/enterprise)**
 - ✅ [**Secure UI access with Single Sign-On**](../docs/proxy/ui.md#setup-ssoauth-for-ui)
+- ✅ [**Audit Logs with retention policy**](../docs/proxy/enterprise.md#audit-logs)
 - ✅ [**JWT-Auth**](../docs/proxy/token_auth.md)
 - ✅ [**Prompt Injection Detection**](#prompt-injection-detection-lakeraai)
 - ✅ [**Invite Team Members to access `/spend` Routes**](../docs/proxy/cost_tracking#allowing-non-proxy-admins-to-access-spend-endpoints)
@@ -2,30 +2,213 @@ import Image from '@theme/IdealImage';
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-# ✨ Enterprise Features - Content Mod, SSO, Custom Swagger
+# ✨ Enterprise Features - SSO, Audit Logs, Guardrails
 
-Features here are behind a commercial license in our `/enterprise` folder. [**See Code**](https://github.com/BerriAI/litellm/tree/main/enterprise)
+:::tip
 
-:::info
+Get in touch with us [here](https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat)
 
-[Get Started with Enterprise here](https://github.com/BerriAI/litellm/tree/main/enterprise)
-
 :::
 
 Features:
 - ✅ [SSO for Admin UI](./ui.md#✨-enterprise-features)
-- ✅ Content Moderation with LLM Guard, LlamaGuard, Google Text Moderations
-- ✅ [Prompt Injection Detection (with LakeraAI API)](#prompt-injection-detection-lakeraai)
+- ✅ [Audit Logs](#audit-logs)
+- ✅ [Tracking Spend for Custom Tags](#tracking-spend-for-custom-tags)
+- ✅ [Content Moderation with LLM Guard, LlamaGuard, Google Text Moderations](#content-moderation)
+- ✅ [Prompt Injection Detection (with LakeraAI API)](#prompt-injection-detection---lakeraai)
+- ✅ [Custom Branding + Routes on Swagger Docs](#swagger-docs---custom-routes--branding)
 - ✅ Reject calls from Blocked User list
 - ✅ Reject calls (incoming / outgoing) with Banned Keywords (e.g. competitors)
-- ✅ Don't log/store specific requests to Langfuse, Sentry, etc. (eg confidential LLM requests)
-- ✅ Tracking Spend for Custom Tags
-- ✅ Custom Branding + Routes on Swagger Docs
-- ✅ Audit Logs for `Created At, Created By` when Models Added
+
+## Audit Logs
+
+Store audit logs for **Create, Update, Delete** operations done on `Teams` and `Virtual Keys`
+
+**Step 1** Switch on audit logs
+
+```shell
+litellm_settings:
+  store_audit_logs: true
+```
+
+Start the litellm proxy with this config
+
+**Step 2** Test it - Create a Team
+
+```shell
+curl --location 'http://0.0.0.0:4000/team/new' \
+--header 'Authorization: Bearer sk-1234' \
+--header 'Content-Type: application/json' \
+--data '{
+  "max_budget": 2
+}'
+```
+
+**Step 3** Expected Log
+
+```json
+{
+  "id": "e1760e10-4264-4499-82cd-c08c86c8d05b",
+  "updated_at": "2024-06-06T02:10:40.836420+00:00",
+  "changed_by": "109010464461339474872",
+  "action": "created",
+  "table_name": "LiteLLM_TeamTable",
+  "object_id": "82e725b5-053f-459d-9a52-867191635446",
+  "before_value": null,
+  "updated_values": {
+    "team_id": "82e725b5-053f-459d-9a52-867191635446",
+    "admins": [],
+    "members": [],
+    "members_with_roles": [
+      {
+        "role": "admin",
+        "user_id": "109010464461339474872"
+      }
+    ],
+    "max_budget": 2.0,
+    "models": [],
+    "blocked": false
+  }
+}
+```
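The expected log pairs `before_value` (state before the change) with `updated_values` (state after). A minimal stdlib-only sketch, using a hypothetical helper that is not part of LiteLLM, of computing which fields an audit entry changed:

```python
import json

def changed_fields(audit_entry: dict) -> dict:
    """Return {field: (before, after)} for every field that differs
    between before_value and updated_values of one audit-log entry."""
    before = audit_entry.get("before_value") or {}
    after = audit_entry.get("updated_values") or {}
    return {
        k: (before.get(k), after.get(k))
        for k in set(before) | set(after)
        if before.get(k) != after.get(k)
    }

# trimmed version of the "created" entry shown above
entry = json.loads(
    '{"action": "created", "table_name": "LiteLLM_TeamTable",'
    ' "before_value": null,'
    ' "updated_values": {"max_budget": 2.0, "blocked": false}}'
)
print(changed_fields(entry))
```

For a "created" entry `before_value` is null, so every field shows up as newly set.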
+
+## Tracking Spend for Custom Tags
+
+Requirements:
+
+- Virtual Keys & a database should be set up, see [virtual keys](https://docs.litellm.ai/docs/proxy/virtual_keys)
+
+#### Usage - /chat/completions requests with request tags
+
+<Tabs>
+
+<TabItem value="openai" label="OpenAI Python v1.0.0+">
+
+Set `extra_body={"metadata": { }}` to the `metadata` you want to pass
+
+```python
+import openai
+client = openai.OpenAI(
+    api_key="anything",
+    base_url="http://0.0.0.0:4000"
+)
+
+# request sent to model set on litellm proxy, `litellm --model`
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {
+            "role": "user",
+            "content": "this is a test request, write a short poem"
+        }
+    ],
+    extra_body={
+        "metadata": {
+            "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
+        }
+    }
+)
+
+print(response)
+```
+</TabItem>
+
+<TabItem value="Curl" label="Curl Request">
+
+Pass `metadata` as part of the request body
+
+```shell
+curl --location 'http://0.0.0.0:4000/chat/completions' \
+    --header 'Content-Type: application/json' \
+    --data '{
+    "model": "gpt-3.5-turbo",
+    "messages": [
+        {
+            "role": "user",
+            "content": "what llm are you"
+        }
+    ],
+    "metadata": {"tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]}
+}'
+```
+</TabItem>
+<TabItem value="langchain" label="Langchain">
+
+```python
+from langchain.chat_models import ChatOpenAI
+from langchain.prompts.chat import (
+    ChatPromptTemplate,
+    HumanMessagePromptTemplate,
+    SystemMessagePromptTemplate,
+)
+from langchain.schema import HumanMessage, SystemMessage
+
+chat = ChatOpenAI(
+    openai_api_base="http://0.0.0.0:4000",
+    model="gpt-3.5-turbo",
+    temperature=0.1,
+    extra_body={
+        "metadata": {
+            "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
+        }
+    }
+)
+
+messages = [
+    SystemMessage(
+        content="You are a helpful assistant that im using to make a test request to."
+    ),
+    HumanMessage(
+        content="test from litellm. tell me why it's amazing in 1 sentence"
+    ),
+]
+response = chat(messages)
+
+print(response)
+```
+
+</TabItem>
+</Tabs>
+
+#### Viewing Spend per tag
+
+#### `/spend/tags` Request Format
+```shell
+curl -X GET "http://0.0.0.0:4000/spend/tags" \
+  -H "Authorization: Bearer sk-1234"
+```
+
+#### `/spend/tags` Response Format
+```shell
+[
+  {
+    "individual_request_tag": "model-anthropic-claude-v2.1",
+    "log_count": 6,
+    "total_spend": 0.000672
+  },
+  {
+    "individual_request_tag": "app-ishaan-local",
+    "log_count": 4,
+    "total_spend": 0.000448
+  },
+  {
+    "individual_request_tag": "app-ishaan-prod",
+    "log_count": 2,
+    "total_spend": 0.000224
+  }
+]
+```
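Each element of the `/spend/tags` response carries a tag's `log_count` and `total_spend`; a short sketch (using the sample values above) that totals spend and finds the costliest tag:

```python
# sample /spend/tags response from above
tags = [
    {"individual_request_tag": "model-anthropic-claude-v2.1", "log_count": 6, "total_spend": 0.000672},
    {"individual_request_tag": "app-ishaan-local", "log_count": 4, "total_spend": 0.000448},
    {"individual_request_tag": "app-ishaan-prod", "log_count": 2, "total_spend": 0.000224},
]

total = sum(t["total_spend"] for t in tags)      # overall spend across tags
top = max(tags, key=lambda t: t["total_spend"])  # costliest tag
print(f"total={total:.6f} top={top['individual_request_tag']}")
```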
 
 ## Content Moderation
-### Content Moderation with LLM Guard
+#### Content Moderation with LLM Guard
 
 Set the LLM Guard API Base in your environment
 
@@ -160,7 +343,7 @@ curl --location 'http://0.0.0.0:4000/v1/chat/completions' \
 </TabItem>
 </Tabs>
 
-### Content Moderation with LlamaGuard
+#### Content Moderation with LlamaGuard
 
 Currently works with Sagemaker's LlamaGuard endpoint.
 
@@ -194,7 +377,7 @@ callbacks: ["llamaguard_moderations"]
 
 
-### Content Moderation with Google Text Moderation
+#### Content Moderation with Google Text Moderation
 
 Requires your GOOGLE_APPLICATION_CREDENTIALS to be set in your .env (same as VertexAI).
 
@@ -250,7 +433,7 @@ Here are the category specific values:
 
 
-### Content Moderation with OpenAI Moderations
+#### Content Moderation with OpenAI Moderations
 
 Use this if you want to reject /chat, /completions, /embeddings calls that fail OpenAI Moderations checks
 
@@ -302,6 +485,42 @@ curl --location 'http://localhost:4000/chat/completions' \
 }'
 ```
+
+## Swagger Docs - Custom Routes + Branding
+
+:::info
+
+Requires a LiteLLM Enterprise key to use. Get a free 2-week license [here](https://forms.gle/sTDVprBs18M4V8Le8)
+
+:::
+
+Set LiteLLM Key in your environment
+
+```bash
+LITELLM_LICENSE=""
+```
+
+#### Customize Title + Description
+
+In your environment, set:
+
+```bash
+DOCS_TITLE="TotalGPT"
+DOCS_DESCRIPTION="Sample Company Description"
+```
+
+#### Customize Routes
+
+Hide admin routes from users.
+
+In your environment, set:
+
+```bash
+DOCS_FILTERED="True" # only shows openai routes to user
+```
+
+<Image img={require('../../img/custom_swagger.png')} style={{ width: '900px', height: 'auto' }} />
+
 ## Enable Blocked User Lists
 If any call is made to proxy with this user id, it'll be rejected - use this if you want to let users opt-out of ai features
 
@@ -417,176 +636,6 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
 }
 '
 ```
-## Tracking Spend for Custom Tags
-
-Requirements:
-
-- Virtual Keys & a database should be set up, see [virtual keys](https://docs.litellm.ai/docs/proxy/virtual_keys)
-
-### Usage - /chat/completions requests with request tags
-
-<Tabs>
-
-<TabItem value="openai" label="OpenAI Python v1.0.0+">
-
-Set `extra_body={"metadata": { }}` to `metadata` you want to pass
-
-```python
-import openai
-client = openai.OpenAI(
-    api_key="anything",
-    base_url="http://0.0.0.0:4000"
-)
-
-# request sent to model set on litellm proxy, `litellm --model`
-response = client.chat.completions.create(
-    model="gpt-3.5-turbo",
-    messages=[
-        {
-            "role": "user",
-            "content": "this is a test request, write a short poem"
-        }
-    ],
-    extra_body={
-        "metadata": {
-            "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
-        }
-    }
-)
-
-print(response)
-```
-</TabItem>
-
-<TabItem value="Curl" label="Curl Request">
-
-Pass `metadata` as part of the request body
-
-```shell
-curl --location 'http://0.0.0.0:4000/chat/completions' \
-    --header 'Content-Type: application/json' \
-    --data '{
-    "model": "gpt-3.5-turbo",
-    "messages": [
-        {
-            "role": "user",
-            "content": "what llm are you"
-        }
-    ],
-    "metadata": {"tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]}
-}'
-```
-</TabItem>
-<TabItem value="langchain" label="Langchain">
-
-```python
-from langchain.chat_models import ChatOpenAI
-from langchain.prompts.chat import (
-    ChatPromptTemplate,
-    HumanMessagePromptTemplate,
-    SystemMessagePromptTemplate,
-)
-from langchain.schema import HumanMessage, SystemMessage
-
-chat = ChatOpenAI(
-    openai_api_base="http://0.0.0.0:4000",
-    model="gpt-3.5-turbo",
-    temperature=0.1,
-    extra_body={
-        "metadata": {
-            "tags": ["model-anthropic-claude-v2.1", "app-ishaan-prod"]
-        }
-    }
-)
-
-messages = [
-    SystemMessage(
-        content="You are a helpful assistant that im using to make a test request to."
-    ),
-    HumanMessage(
-        content="test from litellm. tell me why it's amazing in 1 sentence"
-    ),
-]
-response = chat(messages)
-
-print(response)
-```
-
-</TabItem>
-</Tabs>
-
-### Viewing Spend per tag
-
-#### `/spend/tags` Request Format
-```shell
-curl -X GET "http://0.0.0.0:4000/spend/tags" \
-  -H "Authorization: Bearer sk-1234"
-```
-
-#### `/spend/tags` Response Format
-```shell
-[
-  {
-    "individual_request_tag": "model-anthropic-claude-v2.1",
-    "log_count": 6,
-    "total_spend": 0.000672
-  },
-  {
-    "individual_request_tag": "app-ishaan-local",
-    "log_count": 4,
-    "total_spend": 0.000448
-  },
-  {
-    "individual_request_tag": "app-ishaan-prod",
-    "log_count": 2,
-    "total_spend": 0.000224
-  }
-]
-```
-
-<!-- ## Tracking Spend per Key
-
-## Tracking Spend per User -->
-
-## Swagger Docs - Custom Routes + Branding
-
-:::info
-
-Requires a LiteLLM Enterprise key to use. Get a free 2-week license [here](https://forms.gle/sTDVprBs18M4V8Le8)
-
-:::
-
-Set LiteLLM Key in your environment
-
-```bash
-LITELLM_LICENSE=""
-```
-
-### Customize Title + Description
-
-In your environment, set:
-
-```bash
-DOCS_TITLE="TotalGPT"
-DOCS_DESCRIPTION="Sample Company Description"
-```
-
-### Customize Routes
-
-Hide admin routes from users.
-
-In your environment, set:
-
-```bash
-DOCS_FILTERED="True" # only shows openai routes to user
-```
-
-<Image img={require('../../img/custom_swagger.png')} style={{ width: '900px', height: 'auto' }} />
-
 ## Public Model Hub
 
@@ -36,6 +36,7 @@ const sidebars = {
           label: "📖 All Endpoints (Swagger)",
           href: "https://litellm-api.up.railway.app/",
         },
+        "proxy/enterprise",
         "proxy/demo",
         "proxy/configs",
         "proxy/reliability",
@@ -45,7 +46,6 @@ const sidebars = {
         "proxy/customers",
         "proxy/billing",
         "proxy/user_keys",
-        "proxy/enterprise",
         "proxy/virtual_keys",
         "proxy/alerting",
         {
@@ -60,6 +60,7 @@ _async_failure_callback: List[Callable] = (
 pre_call_rules: List[Callable] = []
 post_call_rules: List[Callable] = []
 turn_off_message_logging: Optional[bool] = False
+store_audit_logs = False  # Enterprise feature, allow users to see audit logs
 ## end of callbacks #############
 
 email: Optional[str] = (
@@ -76,6 +76,17 @@ class LitellmUserRoles(str, enum.Enum):
         return ui_labels.get(self.value, "")
 
 
+class LitellmTableNames(str, enum.Enum):
+    """
+    Enum for Table Names used by LiteLLM
+    """
+
+    TEAM_TABLE_NAME: str = "LiteLLM_TeamTable"
+    USER_TABLE_NAME: str = "LiteLLM_UserTable"
+    KEY_TABLE_NAME: str = "LiteLLM_VerificationToken"
+    PROXY_MODEL_TABLE_NAME: str = "LiteLLM_ModelTable"
+
+
 AlertType = Literal[
     "llm_exceptions",
     "llm_too_slow",
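Because `LitellmTableNames` mixes in `str`, its members compare equal to the raw table-name strings, so they can be stored directly in a string/`Json` column. A self-contained sketch of the pattern (reproducing the enum here for illustration only):

```python
import enum

class LitellmTableNames(str, enum.Enum):
    """Enum for table names used by LiteLLM audit logs."""
    TEAM_TABLE_NAME = "LiteLLM_TeamTable"
    USER_TABLE_NAME = "LiteLLM_UserTable"
    KEY_TABLE_NAME = "LiteLLM_VerificationToken"
    PROXY_MODEL_TABLE_NAME = "LiteLLM_ModelTable"

# str mix-in: members interoperate with plain strings
assert LitellmTableNames.TEAM_TABLE_NAME == "LiteLLM_TeamTable"
print(LitellmTableNames.KEY_TABLE_NAME.value)
```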
@@ -1276,6 +1287,22 @@ class LiteLLM_ErrorLogs(LiteLLMBase):
     endTime: Union[str, datetime, None]
 
 
+class LiteLLM_AuditLogs(LiteLLMBase):
+    id: str
+    updated_at: datetime
+    changed_by: str
+    action: Literal["created", "updated", "deleted"]
+    table_name: Literal[
+        LitellmTableNames.TEAM_TABLE_NAME,
+        LitellmTableNames.USER_TABLE_NAME,
+        LitellmTableNames.KEY_TABLE_NAME,
+        LitellmTableNames.PROXY_MODEL_TABLE_NAME,
+    ]
+    object_id: str
+    before_value: Optional[Json] = None
+    updated_values: Optional[Json] = None
+
+
 class LiteLLM_SpendLogs_ResponseObject(LiteLLMBase):
     response: Optional[List[Union[LiteLLM_SpendLogs, Any]]] = None
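`LiteLLM_AuditLogs` is a plain record of who changed what. A stdlib-only sketch (a dict stands in for the pydantic model, and the IDs are made up) of assembling a "created" entry the way the proxy endpoints in this diff do:

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Optional

def build_audit_entry(changed_by: str, table_name: str, object_id: str,
                      action: str, updated_values: Optional[dict]) -> dict:
    """Assemble a dict shaped like LiteLLM_AuditLogs (sketch only)."""
    assert action in ("created", "updated", "deleted")
    return {
        "id": str(uuid.uuid4()),
        "updated_at": datetime.now(timezone.utc).isoformat(),
        "changed_by": changed_by,
        "action": action,
        "table_name": table_name,
        "object_id": object_id,
        "before_value": None,
        # values are serialized to a JSON string, matching the Json column
        "updated_values": json.dumps(updated_values or {}),
    }

entry = build_audit_entry(
    changed_by="user-123",
    table_name="LiteLLM_VerificationToken",
    object_id="token-abc",
    action="created",
    updated_values={"max_budget": 2.0},
)
print(entry["table_name"], entry["action"])
```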
@@ -24,3 +24,4 @@ general_settings:
 
 litellm_settings:
   callbacks: ["otel"]
+  store_audit_logs: true
@@ -7115,6 +7115,25 @@ async def generate_key_fn(
             )
         )
 
+        # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+        if litellm.store_audit_logs is True:
+            _updated_values = json.dumps(response)
+            asyncio.create_task(
+                create_audit_log_for_update(
+                    request_data=LiteLLM_AuditLogs(
+                        id=str(uuid.uuid4()),
+                        updated_at=datetime.now(timezone.utc),
+                        changed_by=user_api_key_dict.user_id
+                        or litellm_proxy_admin_name,
+                        table_name=LitellmTableNames.KEY_TABLE_NAME,
+                        object_id=response.get("token_id", ""),
+                        action="created",
+                        updated_values=_updated_values,
+                        before_value=None,
+                    )
+                )
+            )
+
         return GenerateKeyResponse(**response)
     except Exception as e:
         traceback.print_exc()
@@ -7138,7 +7157,11 @@
 @router.post(
     "/key/update", tags=["key management"], dependencies=[Depends(user_api_key_auth)]
 )
-async def update_key_fn(request: Request, data: UpdateKeyRequest):
+async def update_key_fn(
+    request: Request,
+    data: UpdateKeyRequest,
+    user_api_key_dict: UserAPIKeyAuth = Depends(user_api_key_auth),
+):
     """
     Update an existing key
     """
@@ -7150,6 +7173,16 @@ async def update_key_fn(request: Request, data: UpdateKeyRequest):
     if prisma_client is None:
         raise Exception("Not connected to DB!")
 
+    existing_key_row = await prisma_client.get_data(
+        token=data.key, table_name="key", query_type="find_unique"
+    )
+
+    if existing_key_row is None:
+        raise HTTPException(
+            status_code=404,
+            detail={"error": f"Key not found, passed key={data.key}"},
+        )
+
     # get non default values for key
     non_default_values = {}
     for k, v in data_json.items():
@@ -7176,6 +7209,29 @@ async def update_key_fn(request: Request, data: UpdateKeyRequest):
         hashed_token = hash_token(key)
         user_api_key_cache.delete_cache(hashed_token)
 
+        # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+        if litellm.store_audit_logs is True:
+            _updated_values = json.dumps(data_json)
+
+            _before_value = existing_key_row.json(exclude_none=True)
+            _before_value = json.dumps(_before_value)
+
+            asyncio.create_task(
+                create_audit_log_for_update(
+                    request_data=LiteLLM_AuditLogs(
+                        id=str(uuid.uuid4()),
+                        updated_at=datetime.now(timezone.utc),
+                        changed_by=user_api_key_dict.user_id
+                        or litellm_proxy_admin_name,
+                        table_name=LitellmTableNames.KEY_TABLE_NAME,
+                        object_id=data.key,
+                        action="updated",
+                        updated_values=_updated_values,
+                        before_value=_before_value,
+                    )
+                )
+            )
+
         return {"key": key, **response["data"]}
     # update based on remaining passed in values
     except Exception as e:
@@ -7238,6 +7294,34 @@ async def delete_key_fn(
 ):
     user_id = None  # unless they're admin
 
+    # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+    # we do this after the first for loop, since first for loop is for validation. we only want this inserted after validation passes
+    if litellm.store_audit_logs is True:
+        # make an audit log for each key deleted
+        for key in data.keys:
+            key_row = await prisma_client.get_data(  # type: ignore
+                token=key, table_name="key", query_type="find_unique"
+            )
+
+            key_row = key_row.json(exclude_none=True)
+            _key_row = json.dumps(key_row)
+
+            asyncio.create_task(
+                create_audit_log_for_update(
+                    request_data=LiteLLM_AuditLogs(
+                        id=str(uuid.uuid4()),
+                        updated_at=datetime.now(timezone.utc),
+                        changed_by=user_api_key_dict.user_id
+                        or litellm_proxy_admin_name,
+                        table_name=LitellmTableNames.KEY_TABLE_NAME,
+                        object_id=key,
+                        action="deleted",
+                        updated_values="{}",
+                        before_value=_key_row,
+                    )
+                )
+            )
+
     number_deleted_keys = await delete_verification_token(
         tokens=keys, user_id=user_id
     )
@@ -10365,12 +10449,65 @@ async def new_team(
                 }
             },
         )
+
+        # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+        if litellm.store_audit_logs is True:
+            _updated_values = complete_team_data.json(exclude_none=True)
+            _updated_values = json.dumps(_updated_values)
+
+            asyncio.create_task(
+                create_audit_log_for_update(
+                    request_data=LiteLLM_AuditLogs(
+                        id=str(uuid.uuid4()),
+                        updated_at=datetime.now(timezone.utc),
+                        changed_by=user_api_key_dict.user_id or litellm_proxy_admin_name,
+                        table_name=LitellmTableNames.TEAM_TABLE_NAME,
+                        object_id=data.team_id,
+                        action="created",
+                        updated_values=_updated_values,
+                        before_value=None,
+                    )
+                )
+            )
+
     try:
         return team_row.model_dump()
     except Exception as e:
         return team_row.dict()
+
+
+async def create_audit_log_for_update(request_data: LiteLLM_AuditLogs):
+    if premium_user is not True:
+        return
+
+    if litellm.store_audit_logs is not True:
+        return
+    if prisma_client is None:
+        raise Exception("prisma_client is None, no DB connected")
+
+    verbose_proxy_logger.debug("creating audit log for %s", request_data)
+
+    if isinstance(request_data.updated_values, dict):
+        request_data.updated_values = json.dumps(request_data.updated_values)
+
+    if isinstance(request_data.before_value, dict):
+        request_data.before_value = json.dumps(request_data.before_value)
+
+    _request_data = request_data.dict(exclude_none=True)
+
+    try:
+        await prisma_client.db.litellm_auditlog.create(
+            data={
+                **_request_data,  # type: ignore
+            }
+        )
+    except Exception as e:
+        # [Non-Blocking Exception. Do not allow blocking LLM API call]
+        verbose_proxy_logger.error(f"Failed Creating audit log {e}")
+
+    return
 
 
 @router.post(
     "/team/update", tags=["team management"], dependencies=[Depends(user_api_key_auth)]
 )
@@ -10443,6 +10580,27 @@ async def update_team(
         team_id=data.team_id,
     )
 
+    # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+    if litellm.store_audit_logs is True:
+        _before_value = existing_team_row.json(exclude_none=True)
+        _before_value = json.dumps(_before_value)
+        _after_value: str = json.dumps(updated_kv)
+
+        asyncio.create_task(
+            create_audit_log_for_update(
+                request_data=LiteLLM_AuditLogs(
+                    id=str(uuid.uuid4()),
+                    updated_at=datetime.now(timezone.utc),
+                    changed_by=user_api_key_dict.user_id or litellm_proxy_admin_name,
+                    table_name=LitellmTableNames.TEAM_TABLE_NAME,
+                    object_id=data.team_id,
+                    action="updated",
+                    updated_values=_after_value,
+                    before_value=_before_value,
+                )
+            )
+        )
+
     return team_row
@@ -10714,6 +10872,35 @@ async def delete_team(
             detail={"error": f"Team not found, passed team_id={team_id}"},
         )
 
+    # Enterprise Feature - Audit Logging. Enable with litellm.store_audit_logs = True
+    # we do this after the first for loop, since first for loop is for validation. we only want this inserted after validation passes
+    if litellm.store_audit_logs is True:
+        # make an audit log for each team deleted
+        for team_id in data.team_ids:
+            team_row = await prisma_client.get_data(  # type: ignore
+                team_id=team_id, table_name="team", query_type="find_unique"
+            )
+
+            _team_row = team_row.json(exclude_none=True)
+
+            asyncio.create_task(
+                create_audit_log_for_update(
+                    request_data=LiteLLM_AuditLogs(
+                        id=str(uuid.uuid4()),
+                        updated_at=datetime.now(timezone.utc),
+                        changed_by=user_api_key_dict.user_id
+                        or litellm_proxy_admin_name,
+                        table_name=LitellmTableNames.TEAM_TABLE_NAME,
+                        object_id=team_id,
+                        action="deleted",
+                        updated_values="{}",
+                        before_value=_team_row,
+                    )
+                )
+            )
+
+    # End of Audit logging
+
     ## DELETE ASSOCIATED KEYS
     await prisma_client.delete_data(team_id_list=data.team_ids, table_name="key")
     ## DELETE TEAMS
@@ -244,3 +244,15 @@ model LiteLLM_InvitationLink {
   liteLLM_user_table_created LiteLLM_UserTable @relation("CreatedBy", fields: [created_by], references: [user_id])
   liteLLM_user_table_updated LiteLLM_UserTable @relation("UpdatedBy", fields: [updated_by], references: [user_id])
 }
+
+model LiteLLM_AuditLog {
+  id             String   @id @default(uuid())
+  updated_at     DateTime @default(now())
+  changed_by     String   // user or system that performed the action
+  action         String   // create, update, delete
+  table_name     String   // one of LitellmTableNames.TEAM_TABLE_NAME, LitellmTableNames.USER_TABLE_NAME, LitellmTableNames.PROXY_MODEL_TABLE_NAME
+  object_id      String   // id of the object being audited. This can be the key id, team id, user id, model id
+  before_value   Json?    // value of the row before the change
+  updated_values Json?    // value of the row after the change
+}
@ -244,3 +244,15 @@ model LiteLLM_InvitationLink {
|
||||||
liteLLM_user_table_created LiteLLM_UserTable @relation("CreatedBy", fields: [created_by], references: [user_id])
|
liteLLM_user_table_created LiteLLM_UserTable @relation("CreatedBy", fields: [created_by], references: [user_id])
|
||||||
liteLLM_user_table_updated LiteLLM_UserTable @relation("UpdatedBy", fields: [updated_by], references: [user_id])
|
liteLLM_user_table_updated LiteLLM_UserTable @relation("UpdatedBy", fields: [updated_by], references: [user_id])
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
model LiteLLM_AuditLog {
|
||||||
|
id String @id @default(uuid())
|
||||||
|
updated_at DateTime @default(now())
|
||||||
|
changed_by String // user or system that performed the action
|
||||||
|
action String // create, update, delete
|
||||||
|
table_name String // on of LitellmTableNames.TEAM_TABLE_NAME, LitellmTableNames.USER_TABLE_NAME, LitellmTableNames.PROXY_MODEL_TABLE_NAME,
|
||||||
|
object_id String // id of the object being audited. This can be the key id, team id, user id, model id
|
||||||
|
before_value Json? // value of the row
|
||||||
|
updated_values Json? // value of the row after change
|
||||||
|
}
|