adding helicone integration to docs
parent 95c4abdbc3
commit d96e77e424
2 changed files with 52 additions and 1 deletion
docs/helicone_integration.md (new file, 50 lines)

@@ -0,0 +1,50 @@
# Helicone Tutorial

Helicone is an open-source observability platform that proxies your OpenAI traffic and gives you key insights into your spend, latency, and usage.

## Use Helicone to log requests across all LLM Providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM)

liteLLM provides `success_callback` and `failure_callback`, making it easy for you to send data to a particular provider depending on the status of your responses.
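
For instance, you could route successes and failures to different providers. A minimal sketch (the Sentry failure logger is illustrative and assumes you have that integration configured):

```python
import litellm

# successful responses get logged to Helicone
litellm.success_callback = ["helicone"]

# failed responses get logged elsewhere (Sentry here is an assumption,
# not part of this tutorial's setup)
litellm.failure_callback = ["sentry"]
```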

In this case, we want to log requests to Helicone when a request succeeds.

### Approach 1: Use Callbacks

Use just one line of code to instantly log your responses **across all providers** with Helicone:

```python
litellm.success_callback = ["helicone"]
```

Complete code:

```python
import os
import litellm
from litellm import completion

## set env variables
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"  # the underlying provider key is still required

# set callbacks
litellm.success_callback = ["helicone"]

messages = [{"role": "user", "content": "Hey, how's it going?"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
```
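
Once the callback is set, successful `completion` calls should appear in your Helicone dashboard, with no other code changes required.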

### Approach 2: [OpenAI + Azure only] Use Helicone as a proxy

Helicone provides advanced functionality like caching. Helicone currently supports this for Azure and OpenAI.

If you want to use Helicone to proxy your OpenAI/Azure requests, you can:

- Set Helicone as your base URL via `litellm.api_base`
- Pass in Helicone request headers via `litellm.headers`

Complete code:

```python
import os
import litellm

litellm.api_base = "https://oai.hconeai.com/v1"
litellm.headers = {"Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}"}

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "how does a court case get to the Supreme Court?"}]
)

print(response)
```
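
Since Helicone's proxy features are driven by request headers, capabilities like caching can be layered onto the same setup. A minimal sketch, assuming Helicone's `Helicone-Cache-Enabled` header (check Helicone's docs for the current header names):

```python
import os
import litellm

litellm.api_base = "https://oai.hconeai.com/v1"
litellm.headers = {
    "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    "Helicone-Cache-Enabled": "true",  # assumption: enables Helicone response caching
}

# repeated identical requests can now be served from Helicone's cache
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "how does a court case get to the Supreme Court?"}],
)
print(response)
```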

mkdocs.yml

@@ -7,9 +7,10 @@ nav:
   - 🤖 Supported LLM APIs:
     - Supported Completion & Chat APIs: supported.md
     - Supported Embedding APIs: supported_embedding.md
-  - 💾 liteLLM Client - Logging Output:
+  - 💾 Callbacks - Logging Output:
     - Quick Start: advanced.md
     - Output Integrations: client_integrations.md
+    - Helicone Tutorial: helicone_integration.md
   - 💡 Support:
     - Troubleshooting & Help: troubleshoot.md
     - Contact Us: contact.md