diff --git a/docs/my-website/docs/observability/callbacks.md b/docs/my-website/docs/observability/callbacks.md
index 74641f147..4f9e91944 100644
--- a/docs/my-website/docs/observability/callbacks.md
+++ b/docs/my-website/docs/observability/callbacks.md
@@ -6,8 +6,8 @@ liteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for
 liteLLM supports:
 
-- [Helicone](https://docs.helicone.ai/introduction)
 - [LLMonitor](https://llmonitor.com/docs)
+- [Helicone](https://docs.helicone.ai/introduction)
 - [Sentry](https://docs.sentry.io/platforms/python/)
 - [PostHog](https://posthog.com/docs/libraries/python)
 - [Slack](https://slack.dev/bolt-python/concepts)
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index 53735fbf3..e37da776a 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -17,36 +17,46 @@ const sidebars = {
   // But you can create a sidebar manually
   tutorialSidebar: [
-    { type: "doc", id: "index" }, // NEW
+    { type: "doc", id: "index" }, // NEW
     {
-      type: 'category',
-      label: 'Completion()',
-      items: ['completion/input','completion/output'],
+      type: "category",
+      label: "Completion()",
+      items: ["completion/input", "completion/output"],
     },
     {
-      type: 'category',
-      label: 'Embedding()',
-      items: ['embedding/supported_embedding'],
+      type: "category",
+      label: "Embedding()",
+      items: ["embedding/supported_embedding"],
     },
-    'debugging/local_debugging',
-    'completion/supported',
+    "debugging/local_debugging",
+    "completion/supported",
     {
-      type: 'category',
-      label: 'Tutorials',
-      items: ['tutorials/huggingface_tutorial', 'tutorials/TogetherAI_liteLLM', 'tutorials/debugging_tutorial'],
+      type: "category",
+      label: "Tutorials",
+      items: [
+        "tutorials/huggingface_tutorial",
+        "tutorials/TogetherAI_liteLLM",
+        "tutorials/debugging_tutorial",
+      ],
     },
-    'token_usage',
-    'stream',
-    'secret',
-    'caching',
+    "token_usage",
+    "stream",
+    "secret",
+    "caching",
     {
-      type: 'category',
-      label: 'Logging & Observability',
-      items: ['observability/callbacks', 'observability/integrations', 'observability/helicone_integration', 'observability/supabase_integration'],
+      type: "category",
+      label: "Logging & Observability",
+      items: [
+        "observability/callbacks",
+        "observability/integrations",
+        "observability/llmonitor_integration",
+        "observability/helicone_integration",
+        "observability/supabase_integration",
+      ],
     },
-    'troubleshoot',
-    'contributing',
-    'contact'
+    "troubleshoot",
+    "contributing",
+    "contact",
   ],
 };
diff --git a/docs/my-website/src/pages/observability/callbacks.md b/docs/my-website/src/pages/observability/callbacks.md
index 7ac67b30d..323d73580 100644
--- a/docs/my-website/src/pages/observability/callbacks.md
+++ b/docs/my-website/src/pages/observability/callbacks.md
@@ -1,29 +1,30 @@
 # Callbacks
 
 ## Use Callbacks to send Output Data to Posthog, Sentry etc
-liteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for you to send data to a particular provider depending on the status of your responses.
-liteLLM supports:
+liteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for you to send data to a particular provider depending on the status of your responses.
+liteLLM supports:
+
+- [LLMonitor](https://llmonitor.com/docs)
 - [Helicone](https://docs.helicone.ai/introduction)
-- [Sentry](https://docs.sentry.io/platforms/python/)
+- [Sentry](https://docs.sentry.io/platforms/python/)
 - [PostHog](https://posthog.com/docs/libraries/python)
 - [Slack](https://slack.dev/bolt-python/concepts)
 
 ### Quick Start
+
 ```python
 from litellm import completion
 
 # set callbacks
-litellm.success_callback=["posthog", "helicone"]
-litellm.failure_callback=["sentry"]
+litellm.success_callback=["posthog", "helicone", "llmonitor"]
+litellm.failure_callback=["sentry", "llmonitor"]
 
 ## set env variables
 os.environ['SENTRY_API_URL'], os.environ['SENTRY_API_TRACE_RATE']= ""
 os.environ['POSTHOG_API_KEY'], os.environ['POSTHOG_API_URL'] = "api-key", "api-url"
-os.environ["HELICONE_API_KEY"] = ""
+os.environ["HELICONE_API_KEY"] = ""
 
-response = completion(model="gpt-3.5-turbo", messages=messages)
+response = completion(model="gpt-3.5-turbo", messages=messages)
 ```
-
-
diff --git a/docs/my-website/src/pages/observability/integrations.md b/docs/my-website/src/pages/observability/integrations.md
index 6b6d535be..ba5862478 100644
--- a/docs/my-website/src/pages/observability/integrations.md
+++ b/docs/my-website/src/pages/observability/integrations.md
@@ -1,12 +1,9 @@
 # Logging Integrations
 
-| Integration | Required OS Variables | How to Use with callbacks |
-|-----------------|--------------------------------------------|-------------------------------------------|
-| Sentry | `SENTRY_API_URL` | `litellm.success_callback=["sentry"]` |
-| Posthog | `POSTHOG_API_KEY`,`POSTHOG_API_URL` | `litellm.success_callback=["posthog"]` |
-| Slack | `SLACK_API_TOKEN`,`SLACK_API_SECRET`,`SLACK_API_CHANNEL` | `litellm.success_callback=["slack"]` |
-| Helicone | `HELICONE_API_TOKEN` | `litellm.success_callback=["helicone"]` |
-
-
-
-
+| Integration | Required OS Variables                                    | How to Use with callbacks                 |
+| ----------- | -------------------------------------------------------- | ----------------------------------------- |
+| LLMonitor   | `LLMONITOR_APP_ID`                                        | `litellm.success_callback=["llmonitor"]`  |
+| Sentry      | `SENTRY_API_URL`                                          | `litellm.success_callback=["sentry"]`     |
+| Posthog     | `POSTHOG_API_KEY`,`POSTHOG_API_URL`                       | `litellm.success_callback=["posthog"]`    |
+| Slack       | `SLACK_API_TOKEN`,`SLACK_API_SECRET`,`SLACK_API_CHANNEL`  | `litellm.success_callback=["slack"]`      |
+| Helicone    | `HELICONE_API_KEY`                                        | `litellm.success_callback=["helicone"]`   |
diff --git a/mkdocs.yml b/mkdocs.yml
index 97ed0d9ed..b3c88c741 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -16,6 +16,7 @@ nav:
   - 💾 Callbacks - Logging Output:
       - Quick Start: advanced.md
       - Output Integrations: client_integrations.md
+      - LLMonitor Tutorial: llmonitor_integration.md
       - Helicone Tutorial: helicone_integration.md
       - Supabase Tutorial: supabase_integration.md
       - BerriSpend Tutorial: berrispend_integration.md
diff --git a/proxy-server/readme.md b/proxy-server/readme.md
index edd03de3f..9c3c13934 100644
--- a/proxy-server/readme.md
+++ b/proxy-server/readme.md
@@ -33,7 +33,7 @@
 - Call all models using the OpenAI format - `completion(model, messages)`
 - Text responses will always be available at `['choices'][0]['message']['content']`
 - **Error Handling** Using Model Fallbacks (if `GPT-4` fails, try `llama2`)
-- **Logging** - Log Requests, Responses and Errors to `Supabase`, `Posthog`, `Mixpanel`, `Sentry`, `Helicone`, `LLMonitor` (Any of the supported providers here: https://litellm.readthedocs.io/en/latest/advanced/
+- **Logging** - Log Requests, Responses and Errors to `Supabase`, `Posthog`, `Mixpanel`, `Sentry`, `LLMonitor`, `Helicone` (Any of the supported providers here: https://litellm.readthedocs.io/en/latest/advanced/)
 
 **Example: Logs sent to Supabase**
 [Screenshot: 2023-08-11 at 4 02 46 PM]
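
A note on the Quick Start snippet touched by this diff: as written it is not runnable. `litellm` and `os` are never imported, `messages` is undefined, and `os.environ['SENTRY_API_URL'], os.environ['SENTRY_API_TRACE_RATE'] = ""` tries to unpack a single empty string into two targets, which raises a `ValueError`. A minimal runnable sketch of the same setup follows; the placeholder values, the sample `messages` payload, and the `OPENAI_API_KEY` line are illustrative assumptions, not part of the diff:

```python
import os

import litellm
from litellm import completion

# register callbacks before making any calls
litellm.success_callback = ["posthog", "helicone", "llmonitor"]
litellm.failure_callback = ["sentry", "llmonitor"]

# one env var per assignment avoids the tuple-unpacking bug in the snippet;
# all values below are placeholders
os.environ["SENTRY_API_URL"] = "your-sentry-url"
os.environ["SENTRY_API_TRACE_RATE"] = "1.0"
os.environ["POSTHOG_API_KEY"] = "api-key"
os.environ["POSTHOG_API_URL"] = "api-url"
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"  # per the integrations table
os.environ["OPENAI_API_KEY"] = "sk-..."  # the completion call itself needs a provider key

messages = [{"role": "user", "content": "Hi, how are you?"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
```

With the callbacks registered this way, each successful `completion()` call is reported to PostHog, Helicone, and LLMonitor, while failures go to Sentry and LLMonitor.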