Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26 19:24:27 +00:00

Commit ffde1d75d5 (parent c546dc83c2): docs fix order of logging integrations

1 changed file with 105 additions and 103 deletions

@@ -2,11 +2,11 @@
Log Proxy input, output, and exceptions using:

- Langfuse
- OpenTelemetry
- GCS, s3, Azure (Blob) Buckets
- Lunary
- MLflow
- Custom Callbacks
- Langsmith
- DataDog

@@ -184,107 +184,6 @@ Found under `kwargs["standard_logging_object"]`. This is a standard payload, log

[👉 **Standard Logging Payload Specification**](./logging_spec)
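In a callback, this payload can be read straight off `kwargs`. A minimal sketch with a stubbed payload (the `model` and `response_cost` fields accessed here are illustrative; see the spec linked above for the full schema):

```python
def log_success(kwargs: dict) -> str:
    # The standard logging payload travels inside the callback kwargs.
    payload = kwargs.get("standard_logging_object", {})
    # Field names below are illustrative; consult the spec for the full schema.
    model = payload.get("model", "unknown")
    cost = payload.get("response_cost", 0.0)
    return f"model={model} cost={cost}"

# Example with a stubbed payload (not a real proxy callback invocation):
line = log_success({"standard_logging_object": {"model": "gpt-4o", "response_cost": 0.0021}})
print(line)  # model=gpt-4o cost=0.0021
```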
## Langfuse

We will use the `--config` flag to set `litellm.success_callback = ["langfuse"]`; this will log all successful LLM calls to Langfuse. Make sure to set `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` in your environment.

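Following the same pattern as the other integrations, a minimal `config.yaml` for this might look like (a sketch; the wildcard model entries mirror the Lunary and MLflow examples):

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "*"
litellm_settings:
  success_callback: ["langfuse"]
```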
@@ -1298,6 +1197,109 @@ LiteLLM supports customizing the following Datadog environment variables

| Environment Variable | Description | Default Value | Required |
|---|---|---|---|
| `HOSTNAME` | Hostname tag for your logs | "" | ❌ No |
| `POD_NAME` | Pod name tag (useful for Kubernetes deployments) | "unknown" | ❌ No |
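For example, you might export these before starting the proxy (the values here are illustrative):

```shell
# Illustrative tag values; set these in the proxy's environment
export HOSTNAME="litellm-proxy-1"
export POD_NAME="litellm-pod-abc123"
```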
## Lunary

### Step 1: Install dependencies and set your environment variables

Install the dependencies:

```shell
pip install litellm lunary
```

Get your Lunary public key from https://app.lunary.ai/settings:

```shell
export LUNARY_PUBLIC_KEY="<your-public-key>"
```

### Step 2: Create a `config.yaml` and set `lunary` callbacks

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "*"
litellm_settings:
  success_callback: ["lunary"]
  failure_callback: ["lunary"]
```

### Step 3: Start the LiteLLM proxy

```shell
litellm --config config.yaml
```

### Step 4: Make a request

```shell
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful math tutor. Guide the user through the solution step by step."
      },
      {
        "role": "user",
        "content": "how can I solve 8x + 7 = -23"
      }
    ]
}'
```
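The same request can be built programmatically; a sketch using only the Python standard library (the request is constructed but not sent here, since it assumes the proxy from Step 3 is running on port 4000):

```python
import json
import urllib.request

# Build the same chat request as the curl example above (not sent here).
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful math tutor. Guide the user through the solution step by step."},
        {"role": "user", "content": "how can I solve 8x + 7 = -23"},
    ],
}
req = urllib.request.Request(
    "http://0.0.0.0:4000/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once the proxy is running.
```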
## MLflow

### Step 1: Install dependencies

Install the dependencies:

```shell
pip install litellm mlflow
```

### Step 2: Create a `config.yaml` with `mlflow` callback

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "*"
litellm_settings:
  success_callback: ["mlflow"]
  failure_callback: ["mlflow"]
```

### Step 3: Start the LiteLLM proxy

```shell
litellm --config config.yaml
```

### Step 4: Make a request

```shell
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
}'
```

### Step 5: Review traces

Run the following command to start the MLflow UI and review recorded traces:

```shell
mlflow ui
```
## Custom Callback Class [Async]

Use this when you want to run custom callbacks in `python`.
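As a sketch of the shape such a callback takes: the async hook names below follow litellm's `CustomLogger` interface, but this standalone version simulates the proxy's calls so it can run without litellm installed.

```python
import asyncio

# In real use you would subclass litellm.integrations.custom_logger.CustomLogger
# and register an instance in your config; this standalone sketch only mirrors
# the async hook names so the call flow can be demonstrated.
class MyCustomHandler:
    def __init__(self):
        self.events = []

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Invoked after every successful LLM call.
        self.events.append(("success", kwargs.get("model")))

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        # Invoked after every failed LLM call.
        self.events.append(("failure", kwargs.get("model")))

async def simulate():
    handler = MyCustomHandler()
    # Simulate what the proxy would do on one success and one failure.
    await handler.async_log_success_event({"model": "gpt-4o"}, None, 0.0, 1.0)
    await handler.async_log_failure_event({"model": "gpt-4o"}, None, 0.0, 1.0)
    return handler.events

events = asyncio.run(simulate())
print(events)  # [('success', 'gpt-4o'), ('failure', 'gpt-4o')]
```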