forked from phoenix/litellm-mirror
v2
parent 5dd5acd2ee, commit 6569b435d1
8 changed files with 149 additions and 0 deletions
23 docs/advanced.md Normal file
@@ -0,0 +1,23 @@
# Advanced - liteLLM client

## Use liteLLM client to send Output Data to Posthog, Sentry etc.

liteLLM allows you to create a `completion_client` and an `embedding_client` that send successful / errored LLM API call data to Posthog, Sentry, Slack, etc.

### Quick Start
```python
import os

from main import litellm_client

## set env variables
os.environ['SENTRY_API_URL'] = ""
os.environ['POSTHOG_API_KEY'], os.environ['POSTHOG_API_URL'] = "api-key", "api-url"

# init liteLLM client
client = litellm_client(success_callback=["posthog"], failure_callback=["sentry", "posthog"])
completion = client.completion
embedding = client.embedding

messages = [{"content": "Hello, how are you?", "role": "user"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
```
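Both `success_callback` and `failure_callback` accept a list, so one call's data can be fanned out to several sinks at once. A minimal sketch of that general fan-out pattern (hypothetical code for illustration, not liteLLM's actual implementation):

```python
# Hypothetical sketch of the fan-out pattern: every registered handler
# receives the same event data, and one failing handler does not block the rest.
def dispatch(handlers, event):
    delivered = []
    for name, handler in handlers.items():
        try:
            handler(event)
            delivered.append(name)
        except Exception:
            # a broken integration (e.g. a bad API key) must not break the LLM call
            pass
    return delivered

sent = []
handlers = {
    "posthog": lambda event: sent.append(("posthog", event["model"])),
    # deliberately broken handler, to show that sinks are isolated
    "sentry": lambda event: 1 / 0,
}
print(dispatch(handlers, {"model": "gpt-3.5-turbo"}))  # ['posthog']
```

This is why passing `failure_callback=["sentry", "posthog"]` is safe: a misconfigured integration only drops its own events.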
11 docs/client_integrations.md Normal file
@@ -0,0 +1,11 @@
# Data Logging Integrations
| Integration | Required OS Variables | How to Use with litellm Client |
|-------------|-----------------------|--------------------------------|
| Sentry | `SENTRY_API_URL` | `client = litellm_client(success_callback=["sentry"], failure_callback=["sentry"])` |
| Posthog | `POSTHOG_API_KEY`,<br>`POSTHOG_API_URL` | `client = litellm_client(success_callback=["posthog"], failure_callback=["posthog"])` |
| Slack | `SLACK_API_TOKEN`,<br>`SLACK_API_SECRET`,<br>`SLACK_API_CHANNEL` | `client = litellm_client(success_callback=["slack"], failure_callback=["slack"])` |
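Each integration only needs its OS variables set before the client is created. A small pre-flight check you could run yourself (the `REQUIRED_VARS` table mirrors the table above; the helper is illustrative, not part of litellm):

```python
import os

# Required environment variables per integration, mirroring the table above.
REQUIRED_VARS = {
    "sentry": ["SENTRY_API_URL"],
    "posthog": ["POSTHOG_API_KEY", "POSTHOG_API_URL"],
    "slack": ["SLACK_API_TOKEN", "SLACK_API_SECRET", "SLACK_API_CHANNEL"],
}

def missing_vars(callbacks):
    """Return the env vars still unset for the requested callbacks."""
    return [
        var
        for name in callbacks
        for var in REQUIRED_VARS[name]
        if not os.environ.get(var)
    ]

os.environ["POSTHOG_API_KEY"] = "api-key"
os.environ["POSTHOG_API_URL"] = "api-url"
print(missing_vars(["posthog"]))           # []
print(missing_vars(["posthog", "slack"]))  # the slack vars are still unset
```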
6 docs/contact.md Normal file
@@ -0,0 +1,6 @@
# Contact Us

[](https://discord.gg/wuPM9dRgDw)

* [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
* Contact us at ishaan@berri.ai / krrish@berri.ai
39 docs/index.md Normal file
@@ -0,0 +1,39 @@
# *🚅 litellm*

a light 100-line package to simplify calling the OpenAI, Azure, Cohere, and Anthropic APIs

###### litellm manages:
* Calling all LLM APIs using the OpenAI format - `completion(model, messages)`
* Consistent output for all LLM APIs: the text response is always available at `['choices'][0]['message']['content']`
* **[Advanced]** Automatically logging your output to Sentry, Posthog, Slack - [see liteLLM Client](/docs/advanced.md)
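Because every provider's response is normalized to the OpenAI format, the extraction code is identical no matter which model answered. A sketch against a sample response dict (the dict below is illustrative data, not a live API call):

```python
# A response in the OpenAI chat-completion shape, as litellm returns
# for every provider; the values here are sample data.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm doing well, thanks!"}}
    ],
    "model": "command-nightly",
}

# The same lookup works whether the model was OpenAI, Azure, Cohere, ...
text = response['choices'][0]['message']['content']
print(text)  # I'm doing well, thanks!
```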

## Quick Start
Go directly to code: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)

### Installation
```
pip install litellm
```

### Usage
```python
import os

from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion("command-nightly", messages)
```

Need help / support? [See troubleshooting](/docs/troubleshoot.md)

## Why did we build liteLLM
- **Need for simplicity**: our code was getting extremely complicated managing & translating calls between Azure, OpenAI, and Cohere

## Support
* [Meet with us 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
* Contact us at ishaan@berri.ai / krrish@berri.ai
41 docs/supported.md Normal file
@@ -0,0 +1,41 @@
## Generation/Completion/Chat Completion Models

### OpenAI Chat Completion Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| gpt-3.5-turbo | `completion('gpt-3.5-turbo', messages)` | `os.environ['OPENAI_API_KEY']` |
| gpt-4 | `completion('gpt-4', messages)` | `os.environ['OPENAI_API_KEY']` |

### Azure OpenAI Chat Completion Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| gpt-3.5-turbo | `completion('gpt-3.5-turbo', messages, azure=True)` | `os.environ['AZURE_API_KEY']`,<br>`os.environ['AZURE_API_BASE']`,<br>`os.environ['AZURE_API_VERSION']` |
| gpt-4 | `completion('gpt-4', messages, azure=True)` | `os.environ['AZURE_API_KEY']`,<br>`os.environ['AZURE_API_BASE']`,<br>`os.environ['AZURE_API_VERSION']` |

### OpenAI Text Completion Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| text-davinci-003 | `completion('text-davinci-003', messages)` | `os.environ['OPENAI_API_KEY']` |

### Cohere Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| command-nightly | `completion('command-nightly', messages)` | `os.environ['COHERE_API_KEY']` |
### OpenRouter Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| google/palm-2-codechat-bison | `completion('google/palm-2-codechat-bison', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| google/palm-2-chat-bison | `completion('google/palm-2-chat-bison', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| openai/gpt-3.5-turbo | `completion('openai/gpt-3.5-turbo', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| openai/gpt-3.5-turbo-16k | `completion('openai/gpt-3.5-turbo-16k', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| openai/gpt-4-32k | `completion('openai/gpt-4-32k', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| anthropic/claude-2 | `completion('anthropic/claude-2', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| anthropic/claude-instant-v1 | `completion('anthropic/claude-instant-v1', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| meta-llama/llama-2-13b-chat | `completion('meta-llama/llama-2-13b-chat', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
| meta-llama/llama-2-70b-chat | `completion('meta-llama/llama-2-70b-chat', messages)` | `os.environ['OPENROUTER_API_KEY']`,<br>`os.environ['OR_SITE_URL']`,<br>`os.environ['OR_APP_NAME']` |
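All OpenRouter model names in the table follow a `provider/model` naming scheme, so the upstream provider can be read off the prefix. A small illustrative helper (not part of litellm):

```python
def openrouter_provider(model_name):
    """Split an OpenRouter model name into its (provider, model) parts."""
    provider, _, model = model_name.partition("/")
    return provider, model

print(openrouter_provider("anthropic/claude-2"))           # ('anthropic', 'claude-2')
print(openrouter_provider("meta-llama/llama-2-70b-chat"))  # ('meta-llama', 'llama-2-70b-chat')
```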
5 docs/supported_embedding.md Normal file
@@ -0,0 +1,5 @@
## Embedding Models

| Model Name | Function Call | Required OS Variables |
|------------|---------------|-----------------------|
| text-embedding-ada-002 | `embedding('text-embedding-ada-002', input)` | `os.environ['OPENAI_API_KEY']` |
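In the OpenAI embedding response format, the vector sits at `['data'][0]['embedding']`. A sketch on sample data rather than a live call (the dict below is illustrative, with a truncated 3-element vector):

```python
# Sample response in the OpenAI embedding shape; the vector is
# truncated illustrative data, not real model output.
response = {
    "data": [{"embedding": [0.01, -0.02, 0.03], "index": 0}],
    "model": "text-embedding-ada-002",
}

vector = response['data'][0]['embedding']
print(len(vector))  # 3
```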
9 docs/troubleshoot.md Normal file
@@ -0,0 +1,9 @@
## Stable Version

If you're running into problems with installation or usage, use the stable version of litellm:

```
pip install litellm==0.1.1
```
15 mkdocs.yml Normal file
@@ -0,0 +1,15 @@
site_name: liteLLM
nav:
  - ⚡ Getting Started:
    - Installation & Quick Start: index.md
  - 🤖 Supported LLM APIs:
    - Supported Completion & Chat APIs: supported.md
    - Supported Embedding APIs: supported_embedding.md
  - 💾 liteLLM Client - Logging Output:
    - Quick Start: advanced.md
    - Output Integrations: client_integrations.md
  - 💡 Support:
    - Troubleshooting & Help: troubleshoot.md
    - Contact Us: contact.md

theme: readthedocs