Mirror of https://github.com/BerriAI/litellm.git

Commit b67db13d94: updating docs
Parent: c830f78be9
1 changed file with 6 additions and 1 deletion
@@ -18,11 +18,16 @@ from litellm import completion
 ## set env variables
 os.environ["HELICONE_API_KEY"] = "your-helicone-key"
 os.environ["OPENAI_API_KEY"], os.environ["COHERE_API_KEY"] = "", ""
 
 # set callbacks
 litellm.success_callback=["helicone"]
 
-response = completion(model="gpt-3.5-turbo", messages=messages)
+#openai call
+response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
+
+#cohere call
+response = completion(model="command-nightly", messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}])
 ```
 
 ### Approach 2: [OpenAI + Azure only] Use Helicone as a proxy
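Put together, the updated Approach 1 snippet amounts to a short end-to-end script: set the Helicone and provider keys, register `helicone` as a success callback, then call `completion()` as usual. Below is a minimal runnable sketch of the result; the `import os` / `import litellm` lines and the placeholder key values are assumed from the surrounding docs context rather than shown in this hunk.

```python
import os

import litellm
from litellm import completion

# set env variables (placeholder values; assumed, not part of the diff)
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

# set callbacks: log every successful completion to Helicone
litellm.success_callback = ["helicone"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
)
print(response)

# cohere call
response = completion(
    model="command-nightly",
    messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}],
)
print(response)
```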
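The hunk's trailing context only shows the Approach 2 heading, not its body. As a rough, hedged illustration of the proxy pattern in general (not the snippet from these docs): requests are sent to Helicone's OpenAI-compatible gateway and carry a `Helicone-Auth` header. The gateway URL, header name, and environment variable names below are assumptions based on Helicone's usual proxy setup, and the example uses the legacy `openai` 0.x SDK interface.

```python
import os

import openai  # legacy 0.x-style SDK interface

# point the OpenAI SDK at Helicone's gateway instead of api.openai.com
# (assumed URL; check Helicone's docs for the current endpoint)
openai.api_base = "https://oai.hconeai.com/v1"
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    # authenticate the proxied request with Helicone (assumed header name)
    headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
print(response)
```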