add missing references

Vince Lwt 2023-08-21 21:38:44 +02:00
parent 61afceece1
commit 342b83544f
6 changed files with 52 additions and 43 deletions

@@ -6,8 +6,8 @@ liteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for
 liteLLM supports:
+- [LLMonitor](https://llmonitor.com/docs)
 - [Helicone](https://docs.helicone.ai/introduction)
 - [Sentry](https://docs.sentry.io/platforms/python/)
 - [PostHog](https://posthog.com/docs/libraries/python)
 - [Slack](https://slack.dev/bolt-python/concepts)

@@ -17,36 +17,46 @@ const sidebars = {
   // But you can create a sidebar manually
   tutorialSidebar: [
     { type: "doc", id: "index" }, // NEW
     {
-      type: 'category',
-      label: 'Completion()',
-      items: ['completion/input','completion/output'],
+      type: "category",
+      label: "Completion()",
+      items: ["completion/input", "completion/output"],
     },
     {
-      type: 'category',
-      label: 'Embedding()',
-      items: ['embedding/supported_embedding'],
+      type: "category",
+      label: "Embedding()",
+      items: ["embedding/supported_embedding"],
     },
-    'debugging/local_debugging',
-    'completion/supported',
+    "debugging/local_debugging",
+    "completion/supported",
     {
-      type: 'category',
-      label: 'Tutorials',
-      items: ['tutorials/huggingface_tutorial', 'tutorials/TogetherAI_liteLLM', 'tutorials/debugging_tutorial'],
+      type: "category",
+      label: "Tutorials",
+      items: [
+        "tutorials/huggingface_tutorial",
+        "tutorials/TogetherAI_liteLLM",
+        "tutorials/debugging_tutorial",
+      ],
     },
-    'token_usage',
-    'stream',
-    'secret',
-    'caching',
+    "token_usage",
+    "stream",
+    "secret",
+    "caching",
     {
-      type: 'category',
-      label: 'Logging & Observability',
-      items: ['observability/callbacks', 'observability/integrations', 'observability/helicone_integration', 'observability/supabase_integration'],
+      type: "category",
+      label: "Logging & Observability",
+      items: [
+        "observability/callbacks",
+        "observability/integrations",
+        "observability/llmonitor_integration",
+        "observability/helicone_integration",
+        "observability/supabase_integration",
+      ],
     },
-    'troubleshoot',
-    'contributing',
-    'contact'
+    "troubleshoot",
+    "contributing",
+    "contact",
   ],
 };

@@ -1,29 +1,30 @@
 # Callbacks
 ## Use Callbacks to send Output Data to Posthog, Sentry etc
 liteLLM provides `success_callbacks` and `failure_callbacks`, making it easy for you to send data to a particular provider depending on the status of your responses.
 liteLLM supports:
+- [LLMonitor](https://llmonitor.com/docs)
 - [Helicone](https://docs.helicone.ai/introduction)
 - [Sentry](https://docs.sentry.io/platforms/python/)
 - [PostHog](https://posthog.com/docs/libraries/python)
 - [Slack](https://slack.dev/bolt-python/concepts)
 ### Quick Start
 ```python
 from litellm import completion
 # set callbacks
-litellm.success_callback=["posthog", "helicone"]
-litellm.failure_callback=["sentry"]
+litellm.success_callback=["posthog", "helicone", "llmonitor"]
+litellm.failure_callback=["sentry", "llmonitor"]
 ## set env variables
 os.environ['SENTRY_API_URL'], os.environ['SENTRY_API_TRACE_RATE']= ""
 os.environ['POSTHOG_API_KEY'], os.environ['POSTHOG_API_URL'] = "api-key", "api-url"
 os.environ["HELICONE_API_KEY"] = ""
 response = completion(model="gpt-3.5-turbo", messages=messages)
 ```
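The quick-start snippet above only registers provider names as strings; litellm then dispatches to the matching handler on success or failure. As a minimal, self-contained sketch of that dispatch pattern (illustrative only — this is not litellm's internal implementation, and `completion` and the callback lists here are stand-ins):

```python
# Illustrative sketch of the success/failure callback pattern configured above.
# Not litellm's implementation; all names here are stand-ins.

success_callbacks = []
failure_callbacks = []

def completion(model, messages, fail=False):
    """Fake completion call that dispatches to the registered callbacks."""
    if fail:
        error = RuntimeError(f"{model} call failed")
        for cb in failure_callbacks:
            cb(error)
        raise error
    response = {"choices": [{"message": {"content": f"reply to {len(messages)} message(s)"}}]}
    for cb in success_callbacks:
        cb(response)
    return response

# Register one handler per outcome, then make a call.
events = []
success_callbacks.append(lambda r: events.append(("success", r["choices"][0]["message"]["content"])))
failure_callbacks.append(lambda e: events.append(("failure", str(e))))

completion("gpt-3.5-turbo", [{"role": "user", "content": "hi"}])
```

The point of the pattern is that logging providers are observers: adding or removing one never changes the response the caller sees.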

@@ -1,12 +1,9 @@
 # Logging Integrations
-| Integration | Required OS Variables | How to Use with callbacks |
-|-----------------|--------------------------------------------|-------------------------------------------|
-| Sentry | `SENTRY_API_URL` | `litellm.success_callback=["sentry"]` |
-| Posthog | `POSTHOG_API_KEY`,`POSTHOG_API_URL` | `litellm.success_callback=["posthog"]` |
-| Slack | `SLACK_API_TOKEN`,`SLACK_API_SECRET`,`SLACK_API_CHANNEL` | `litellm.success_callback=["slack"]` |
-| Helicone | `HELICONE_API_TOKEN` | `litellm.success_callback=["helicone"]` |
+| Integration | Required OS Variables | How to Use with callbacks |
+| ----------- | -------------------------------------------------------- | ---------------------------------------- |
+| LLMonitor | `LLMONITOR_APP_ID` | `litellm.success_callback=["llmonitor"]` |
+| Sentry | `SENTRY_API_URL` | `litellm.success_callback=["sentry"]` |
+| Posthog | `POSTHOG_API_KEY`,`POSTHOG_API_URL` | `litellm.success_callback=["posthog"]` |
+| Slack | `SLACK_API_TOKEN`,`SLACK_API_SECRET`,`SLACK_API_CHANNEL` | `litellm.success_callback=["slack"]` |
+| Helicone | `HELICONE_API_TOKEN` | `litellm.success_callback=["helicone"]` |
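Each integration in the table only works if its environment variables are set. One way to use the table is to check for missing variables before enabling a callback; a hypothetical helper (the mapping simply mirrors the table above — `missing_vars` is not a litellm API):

```python
import os

# Required environment variables per integration, mirroring the table above.
REQUIRED_ENV = {
    "llmonitor": ["LLMONITOR_APP_ID"],
    "sentry": ["SENTRY_API_URL"],
    "posthog": ["POSTHOG_API_KEY", "POSTHOG_API_URL"],
    "slack": ["SLACK_API_TOKEN", "SLACK_API_SECRET", "SLACK_API_CHANNEL"],
    "helicone": ["HELICONE_API_TOKEN"],
}

def missing_vars(integration):
    """Return the required variables that are not set for the given integration."""
    return [v for v in REQUIRED_ENV[integration] if not os.environ.get(v)]

# Deterministic demo: set sentry's variable, clear posthog's.
os.environ["SENTRY_API_URL"] = "https://example.invalid/sentry"
for var in REQUIRED_ENV["posthog"]:
    os.environ.pop(var, None)

ok = missing_vars("sentry")        # nothing missing
missing = missing_vars("posthog")  # both posthog variables unset
```

Failing fast on missing variables is friendlier than letting a logging callback silently drop data at request time.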

@@ -16,6 +16,7 @@ nav:
   - 💾 Callbacks - Logging Output:
     - Quick Start: advanced.md
     - Output Integrations: client_integrations.md
+    - LLMonitor Tutorial: llmonitor_integration.md
     - Helicone Tutorial: helicone_integration.md
     - Supabase Tutorial: supabase_integration.md
     - BerriSpend Tutorial: berrispend_integration.md

@@ -33,7 +33,7 @@
 - Call all models using the OpenAI format - `completion(model, messages)`
 - Text responses will always be available at `['choices'][0]['message']['content']`
 - **Error Handling** Using Model Fallbacks (if `GPT-4` fails, try `llama2`)
-- **Logging** - Log Requests, Responses and Errors to `Supabase`, `Posthog`, `Mixpanel`, `Sentry`, `Helicone`, `LLMonitor` (Any of the supported providers here: https://litellm.readthedocs.io/en/latest/advanced/)
+- **Logging** - Log Requests, Responses and Errors to `Supabase`, `Posthog`, `Mixpanel`, `Sentry`, `LLMonitor`, `Helicone` (Any of the supported providers here: https://litellm.readthedocs.io/en/latest/advanced/)
 **Example: Logs sent to Supabase**
 <img width="1015" alt="Screenshot 2023-08-11 at 4 02 46 PM" src="https://github.com/ishaan-jaff/proxy-server/assets/29436595/237557b8-ba09-4917-982c-8f3e1b2c8d08">
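The model-fallback behaviour in the feature list above ("if `GPT-4` fails, try `llama2`") can be sketched as a simple loop over candidate models; `call_model` below is a placeholder that simulates a failing first model, not litellm's actual API:

```python
# Sketch of model fallbacks: try each model in order, return the first success.
# `call_model` is a placeholder that simulates gpt-4 being unavailable.

def call_model(model, messages):
    if model == "gpt-4":
        raise RuntimeError("gpt-4 unavailable")
    return f"{model}: answered {len(messages)} message(s)"

def completion_with_fallbacks(models, messages):
    """Try each model in order; raise the last error if all of them fail."""
    last_error = None
    for model in models:
        try:
            return call_model(model, messages)
        except Exception as err:  # fall through to the next model
            last_error = err
    raise last_error

result = completion_with_fallbacks(["gpt-4", "llama2"], [{"role": "user", "content": "hi"}])
```

Because the fallback list is ordered, callers put their preferred model first and cheaper or more available models after it.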