Feat: Add Langtrace integration (#5341)

* Feat: Add Langtrace integration

* add langtrace service name

* fix timestamps for traces

* add tests

* Discard Callback + use existing otel logger

* cleanup

* remove print statements

* remove callback

* add docs

* docs

* add logging docs

* format logging

* remove emoji and add litellm proxy example

* format logging

* format `logging.md`

* add langtrace docs to logging.md

* sync conflict
Ali Waleed 2024-10-11 16:49:53 +03:00 committed by GitHub
parent 42174fde4e
commit 7ec414a3cf
7 changed files with 291 additions and 0 deletions


@@ -0,0 +1,63 @@
import Image from '@theme/IdealImage';
# Langtrace AI
Monitor, evaluate & improve your LLM apps
## Pre-Requisites
Create an account on [Langtrace AI](https://langtrace.ai/login)
## Quick Start
Use just 2 lines of code to instantly log your responses **across all providers** with Langtrace:
```python
litellm.callbacks = ["langtrace"]
langtrace.init()
```
```python
import os

import litellm
from litellm import completion
from langtrace_python_sdk import langtrace

# Langtrace API key
os.environ["LANGTRACE_API_KEY"] = "<your-api-key>"

# LLM API keys
os.environ["OPENAI_API_KEY"] = "<openai-api-key>"

# set langtrace as a callback; litellm will send the data to langtrace
litellm.callbacks = ["langtrace"]

# init langtrace
langtrace.init()

# openai call
response = completion(
    model="gpt-4o",
    messages=[
        {"content": "respond only in Yoda speak.", "role": "system"},
        {"content": "Hello, how are you?", "role": "user"},
    ],
)
print(response)
```
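The same two lines also cover streaming calls. A minimal sketch, assuming the `langtrace` callback logs the assembled response after the stream finishes (as LiteLLM's other logging callbacks do); the environment variables and `langtrace.init()` call from the example above are assumed to already be in place:

```python
import litellm

# streaming call; chunks print as they arrive, and the callback
# logs the full response to Langtrace once the stream is consumed
response = litellm.completion(
    model="gpt-4o",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```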
### Using with LiteLLM Proxy
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

litellm_settings:
  callbacks: ["langtrace"]

environment_variables:
  LANGTRACE_API_KEY: "141a****"
```
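Once the proxy is started with this config (`litellm --config /path/to/config.yaml`), any OpenAI-compatible client can send requests through it and have them traced. A minimal sketch using the `openai` SDK; the base URL assumes the proxy's default port, and `sk-1234` is a placeholder for your proxy key:

```python
from openai import OpenAI

# point the client at the LiteLLM proxy (default port 4000);
# "sk-1234" is a placeholder -- substitute your proxy's master key if one is set
client = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-4",  # the model_name from the config above
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```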


@@ -1307,6 +1307,47 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
Expect to see your log on Langsmith
<Image img={require('../../img/langsmith_new.png')} />
## Logging LLM IO to Langtrace
1. Set `callbacks: ["langtrace"]` in your litellm `config.yaml`
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

litellm_settings:
  callbacks: ["langtrace"]

environment_variables:
  LANGTRACE_API_KEY: "141a****"
```
2. Start Proxy
```shell
litellm --config /path/to/config.yaml
```
3. Test it!
```bash
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "gpt-4",
    "messages": [
        {
            "role": "user",
            "content": "Hello, Claude gm!"
        }
    ]
}'
```
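Equivalently, the same request can be sent from Python (a sketch using the `requests` library, assuming the proxy is running on its default port):

```python
import requests

# same request as the curl above, sent against the local proxy
resp = requests.post(
    "http://0.0.0.0:4000/chat/completions",
    headers={"Content-Type": "application/json"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello, Claude gm!"}],
    },
)
print(resp.json())
```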
## Logging LLM IO to Galileo
[BETA]