import Image from '@theme/IdealImage';
# Langtrace AI

Monitor, evaluate & improve your LLM apps.
## Pre-Requisites
Make an account on Langtrace AI.
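You will also need both SDKs installed locally; a minimal sketch, assuming the PyPI package names `litellm` and `langtrace-python-sdk`:

```shell
# install LiteLLM and the Langtrace Python SDK
pip install litellm langtrace-python-sdk
```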
## Quick Start
Use just 2 lines of code to instantly log your responses across all providers with Langtrace:

```python
litellm.callbacks = ["langtrace"]
langtrace.init()
```
```python
import os

import litellm
from litellm import completion
from langtrace_python_sdk import langtrace

# Langtrace API key
os.environ["LANGTRACE_API_KEY"] = "<your-api-key>"

# LLM API keys
os.environ["OPENAI_API_KEY"] = "<openai-api-key>"

# set langtrace as a callback; litellm will send the data to langtrace
litellm.callbacks = ["langtrace"]

# init langtrace
langtrace.init()

# openai call
response = completion(
    model="gpt-4o",
    messages=[
        {"content": "respond only in Yoda speak.", "role": "system"},
        {"content": "Hello, how are you?", "role": "user"},
    ],
)
print(response)
```
## Using with LiteLLM Proxy
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

litellm_settings:
  callbacks: ["langtrace"]

environment_variables:
  LANGTRACE_API_KEY: "141a****"
```
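The config above can then be exercised end to end; a minimal sketch, assuming the config is saved as `config.yaml` (a hypothetical file name) and the proxy runs on its default port `4000`:

```shell
# start the proxy with the Langtrace callback configured
litellm --config config.yaml

# in another shell: send a test request; the trace should then appear in Langtrace
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello, how are you?"}]}'
```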