diff --git a/docs/my-website/docs/observability/integrations.md b/docs/my-website/docs/observability/integrations.md
index ba5862478..3f4c8616d 100644
--- a/docs/my-website/docs/observability/integrations.md
+++ b/docs/my-website/docs/observability/integrations.md
@@ -2,6 +2,7 @@
 | Integration | Required OS Variables | How to Use with callbacks |
 | ----------- | -------------------------------------------------------- | ---------------------------------------- |
+| Promptlayer | `PROMPTLAYER_API_KEY` | `litellm.success_callback=["promptlayer"]` |
 | LLMonitor | `LLMONITOR_APP_ID` | `litellm.success_callback=["llmonitor"]` |
 | Sentry | `SENTRY_API_URL` | `litellm.success_callback=["sentry"]` |
 | Posthog | `POSTHOG_API_KEY`,`POSTHOG_API_URL` | `litellm.success_callback=["posthog"]` |
diff --git a/docs/my-website/docs/observability/promptlayer_integration.md b/docs/my-website/docs/observability/promptlayer_integration.md
new file mode 100644
index 000000000..c9262fba6
--- /dev/null
+++ b/docs/my-website/docs/observability/promptlayer_integration.md
@@ -0,0 +1,43 @@
+# Promptlayer Tutorial
+
+Promptlayer is a platform for prompt engineers: log OpenAI requests, search usage history, track performance, and visually manage prompt templates.
+
+## Use Promptlayer to log requests across all LLM providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM)
+
+liteLLM provides `callbacks`, making it easy for you to log data depending on the status of your responses.
+
+### Using Callbacks
+
+First, get your PromptLayer API key from https://promptlayer.com/ and set it as the `PROMPTLAYER_API_KEY` environment variable.
+
+With just 2 lines of code, you can instantly log your responses **across all providers** with promptlayer:
+
+```python
+import litellm
+
+litellm.success_callback = ["promptlayer"]
+```
+
+Complete code:
+
+```python
+import os
+
+import litellm
+from litellm import completion
+
+## set env variables
+os.environ["PROMPTLAYER_API_KEY"] = "your-promptlayer-api-key"
+os.environ["OPENAI_API_KEY"], os.environ["COHERE_API_KEY"] = "", ""
+
+# set callbacks
+litellm.success_callback = ["promptlayer"]
+litellm.failure_callback = ["llmonitor"]
+
+# openai call
+response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
+
+# cohere call
+response = completion(model="command-nightly", messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}])
+```