feat: traceloop docs
parent 8a76c80039
commit c470ebf615
8 changed files with 50 additions and 2 deletions
@@ -8,6 +8,7 @@ liteLLM supports:
- [LLMonitor](https://llmonitor.com/docs)
- [Helicone](https://docs.helicone.ai/introduction)
- [Traceloop](https://traceloop.com/docs)
- [Sentry](https://docs.sentry.io/platforms/python/)
- [PostHog](https://posthog.com/docs/libraries/python)
- [Slack](https://slack.dev/bolt-python/concepts)
@@ -25,6 +26,7 @@ litellm.failure_callback=["sentry", "llmonitor"]
os.environ['SENTRY_API_URL'], os.environ['SENTRY_API_TRACE_RATE'] = "", ""
os.environ['POSTHOG_API_KEY'], os.environ['POSTHOG_API_URL'] = "api-key", "api-url"
os.environ["HELICONE_API_KEY"] = ""
os.environ["TRACELOOP_API_KEY"] = ""
os.environ["LLMONITOR_APP_ID"] = ""

response = completion(model="gpt-3.5-turbo", messages=messages)
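For context, a runnable version of the snippet above might look like the following sketch; the callback lists, placeholder keys, and `messages` value are illustrative assumptions, not part of this diff.

```python
import os
import litellm
from litellm import completion

# route successes and failures to the observability tools in use (illustrative lists)
litellm.success_callback = ["posthog", "helicone", "traceloop", "llmonitor"]
litellm.failure_callback = ["sentry", "llmonitor"]

# placeholder keys; fill in real values before running
os.environ['SENTRY_API_URL'], os.environ['SENTRY_API_TRACE_RATE'] = "", ""
os.environ['POSTHOG_API_KEY'], os.environ['POSTHOG_API_URL'] = "api-key", "api-url"
os.environ["HELICONE_API_KEY"] = ""
os.environ["TRACELOOP_API_KEY"] = ""
os.environ["LLMONITOR_APP_ID"] = ""
os.environ["OPENAI_API_KEY"] = "sk-..."  # needed for the underlying model call

messages = [{"role": "user", "content": "Hello, world"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
print(response)
```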
@@ -7,4 +7,5 @@
| Sentry | `SENTRY_API_URL` | `litellm.success_callback=["sentry"]` |
| Posthog | `POSTHOG_API_KEY`,`POSTHOG_API_URL` | `litellm.success_callback=["posthog"]` |
| Slack | `SLACK_API_TOKEN`,`SLACK_API_SECRET`,`SLACK_API_CHANNEL` | `litellm.success_callback=["slack"]` |
| Traceloop | `TRACELOOP_API_KEY` | `litellm.success_callback=["traceloop"]` |
| Helicone | `HELICONE_API_KEY` | `litellm.success_callback=["helicone"]` |
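As a sketch of how one row of this table translates to code (all values are placeholders, using the Slack row as an example):

```python
import os
import litellm

# Slack row: set the variables listed in the table, then register the callback
os.environ["SLACK_API_TOKEN"] = "xoxb-..."
os.environ["SLACK_API_SECRET"] = "..."
os.environ["SLACK_API_CHANNEL"] = "#llm-logs"
litellm.success_callback = ["slack"]
```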
docs/my-website/docs/observability/traceloop_integration.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# Traceloop Tutorial
[Traceloop](https://traceloop.com) is a platform for monitoring and debugging the quality of your LLM outputs.
It provides you with a way to track the performance of your LLM application, roll out changes with confidence, and debug issues in production.
It is based on [OpenTelemetry](https://opentelemetry.io), so it can provide full visibility into your LLM requests, as well as vector DB usage and other infrastructure in your stack.
<Image img={require('../../img/traceloop_dash.png')} />
## Getting Started
First, sign up to get an API key on the [Traceloop dashboard](https://app.traceloop.com).
While Traceloop is still in beta, [ping them](mailto:nir@traceloop.com) and mention you're using LiteLLM to get your early access code.
Then, install the Traceloop SDK:
```bash
pip install traceloop
```
Use just one line of code to instantly log your LLM responses:
```python
litellm.success_callback = ["traceloop"]
```
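Putting the pieces together, a minimal end-to-end sketch might look like this (the model, message, and `OPENAI_API_KEY` placeholder are illustrative assumptions; `TRACELOOP_API_KEY` is expected in the environment, as noted below):

```python
import os
import litellm
from litellm import completion

# assumes TRACELOOP_API_KEY is already set in the environment (see below)
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder for the underlying model call

# report every successful LLM call to Traceloop
litellm.success_callback = ["traceloop"]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is observability?"}],
)
print(response)
```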
When running your app, make sure to set the `TRACELOOP_API_KEY` environment variable to your API key.
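For example, in a shell (the key value is a placeholder):

```bash
export TRACELOOP_API_KEY="your-api-key"
```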
To get better visualizations on how your code behaves, you may want to annotate specific parts of your LLM chain. See [Traceloop docs on decorators](https://traceloop.com/docs/python-sdk/decorators) for more information.
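As an illustration, here is a hypothetical sketch of such an annotation; the `Traceloop.init` call and `workflow` decorator are assumptions based on the linked SDK docs, and all names are placeholders:

```python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

from litellm import completion

Traceloop.init(app_name="joke_app")  # reads TRACELOOP_API_KEY from the environment

@workflow(name="joke_generation")  # groups the spans emitted below under one named trace
def generate_joke():
    return completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about OpenTelemetry"}],
    )

print(generate_joke())
```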
Here's an example PR of such an integration: https://github.com/Codium-ai/pr-agent/pull/244
## Support
For any question or issue with the integration, you can reach out to the Traceloop team on [Slack](https://join.slack.com/t/traceloopcommunity/shared_invite/zt-1plpfpm6r-zOHKI028VkpcWdobX65C~g) or via [email](mailto:dev@traceloop.com).
BIN docs/my-website/img/traceloop_dash.png (new file)
Binary file not shown. Size: 65 KiB.