
import Image from '@theme/IdealImage';
import QueryParamReader from '../../src/components/queryParamReader.js'

# [Beta] Monitor Logs in Production

:::note

This is in beta. Expect frequent updates, as we improve based on your feedback.

:::

LiteLLM provides an integration to let you monitor logs in production.

👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/

<Image img={require('../../img/alt_dashboard.png')} alt="Dashboard" />

## Debug your first logs


### 1. Get your LiteLLM Token

Go to [admin.litellm.ai](https://admin.litellm.ai/) and copy the code snippet with your unique token.

<Image img={require('../../img/hosted_debugger_usage_page.png')} alt="Usage" />

### 2. Set up your environment

Add your token to your environment:

```python
import os

os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb"  # replace with your unique token
```
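If you keep the token in a `.env` file instead of hard-coding it, you can load it at startup. This is a minimal, stdlib-only sketch (the `load_env_file` helper and the `.env` path are assumptions for illustration; in practice a package like `python-dotenv` does the same job more robustly):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: copies KEY=VALUE lines into os.environ.

    A sketch, not a full parser -- ignores blank lines and '#' comments,
    and does not overwrite variables that are already set.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env_file()`, `os.environ["LITELLM_TOKEN"]` is available to the rest of your script just as if you had exported it in your shell.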

Turn on the LiteLLM client:

```python
import litellm

litellm.use_client = True
```

### 3. Make a normal completion() call

```python
import os

import litellm
from litellm import completion

# set env variables
os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb"  # replace with your unique token
os.environ["OPENAI_API_KEY"] = "openai key"

litellm.use_client = True  # enable logging dashboard
messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
```

Your completion() call will print a link to your session dashboard (`https://admin.litellm.ai/<your_unique_token>`).

In the above case it would be: admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb
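The dashboard URL is just the base domain followed by your unique token, so you can also construct it yourself; a small sketch (the token below is the example value from above):

```python
import os

# example token from the snippet above -- replace with your own
os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb"

# assumption: personal dashboard URL = base domain + your unique token
dashboard_url = f"https://admin.litellm.ai/{os.environ['LITELLM_TOKEN']}"
print(dashboard_url)  # https://admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb
```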

Click on your personal dashboard link. Here's how you can find it 👇

<Image img={require('../../img/dash_output.png')} alt="Dashboard" />

👋 Tell us if you need better privacy controls

### 4. Review your request log

Oh! Looks like our request was made successfully. Let's click on it and see exactly what got sent to the LLM provider.

Ah! So we can see that this request was made to Baseten (see `litellm_params > custom_llm_provider`) for a model with ID `7qQNLDB` (see `model`). The message sent was "Hey, how's it going?" and the response received was "As an AI language model, I don't have feelings or emotions, but I can assist you with your queries. How can I assist you today?"

<Image img={require('../../img/dashboard_log.png')} alt="Dashboard Log Row" />

:::info

🎉 Congratulations! You've successfully debugged your first log!

:::