forked from phoenix/litellm-mirror
Merge branch 'BerriAI:main' into main
This commit is contained in:
commit 1c93c9c945
38 changed files with 8936 additions and 114 deletions
@@ -3,7 +3,6 @@
[](https://pypi.org/project/litellm/0.1.1/)
[](https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main)
[](https://github.com/BerriAI/litellm)
[](https://discord.gg/wuPM9dRgDw)
@@ -12,10 +11,11 @@ a light package to simplify calling OpenAI, Azure, Cohere, Anthropic, Huggingfac
- guarantees [consistent output](https://litellm.readthedocs.io/en/latest/output/), text responses will always be available at `['choices'][0]['message']['content']` (see the sketch below)
- exception mapping - common exceptions across providers are mapped to the [OpenAI exception types](https://help.openai.com/en/articles/6897213-openai-library-error-types-guidance)
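A minimal sketch of what those two guarantees look like in practice (the model name and prompt are placeholders; `OpenAIError` is the base class of the exception types litellm maps to, and is the same import this PR's `utils.py` uses):

```python
from litellm import completion
from openai.error import OpenAIError  # base class for the mapped exception types

messages = [{"role": "user", "content": "Hello, how are you?"}]

try:
    # the same call shape works for any supported provider
    response = completion(model="gpt-3.5-turbo", messages=messages)
    # consistent output: the text is always at the same path
    print(response['choices'][0]['message']['content'])
except OpenAIError as err:
    # provider-specific failures arrive mapped to OpenAI exception types
    print(f"mapped exception: {err}")
```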
# usage
<a href='https://docs.litellm.ai/docs/completion/supported' target="_blank"><img alt='None' src='https://img.shields.io/badge/Supported_LLMs-100000?style=for-the-badge&logo=None&logoColor=000000&labelColor=000000&color=8400EA'/></a>
<a href='https://docs.litellm.ai/docs/completion/supported' target="_blank"><img alt='None' src='https://img.shields.io/badge/100+_Supported_LLMs_liteLLM-100000?style=for-the-badge&logo=None&logoColor=000000&labelColor=000000&color=8400EA'/></a>

Demo - https://litellm.ai/playground \
Read the docs - https://docs.litellm.ai/docs/
Demo - https://litellm.ai/playground
Docs - https://docs.litellm.ai/docs/
**Free** Dashboard - https://docs.litellm.ai/docs/debugging/hosted_debugging

## quick start
```
BIN dist/litellm-0.1.432-py3-none-any.whl vendored Normal file (binary not shown)
BIN dist/litellm-0.1.432.tar.gz vendored Normal file (binary not shown)
BIN dist/litellm-0.1.434-py3-none-any.whl vendored Normal file (binary not shown)
BIN dist/litellm-0.1.434.tar.gz vendored Normal file (binary not shown)
BIN dist/litellm-0.1.435-py3-none-any.whl vendored Normal file (binary not shown)
BIN dist/litellm-0.1.435.tar.gz vendored Normal file (binary not shown)
BIN dist/litellm-0.1.446-py3-none-any.whl vendored Normal file (binary not shown)
BIN dist/litellm-0.1.446.tar.gz vendored Normal file (binary not shown)
@@ -21,8 +21,8 @@ liteLLM reads key naming, all keys should be named in the following format:

| Model Name | Function Call | Required OS Variables |
|------------------|-----------------------------------------|-------------------------------------------|
| gpt-3.5-turbo | `completion('gpt-3.5-turbo', messages, azure=True)` | `os.environ['AZURE_API_KEY']`,`os.environ['AZURE_API_BASE']`,`os.environ['AZURE_API_VERSION']` |
| gpt-4 | `completion('gpt-4', messages, azure=True)` | `os.environ['AZURE_API_KEY']`,`os.environ['AZURE_API_BASE']`,`os.environ['AZURE_API_VERSION']` |
| gpt-3.5-turbo | `completion('gpt-3.5-turbo', messages, custom_llm_provider="azure")` | `os.environ['AZURE_API_KEY']`,`os.environ['AZURE_API_BASE']`,`os.environ['AZURE_API_VERSION']` |
| gpt-4 | `completion('gpt-4', messages, custom_llm_provider="azure")` | `os.environ['AZURE_API_KEY']`,`os.environ['AZURE_API_BASE']`,`os.environ['AZURE_API_VERSION']` |
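A sketch of the new calling convention the table documents (all credential values below are placeholders):

```python
import os
from litellm import completion

# placeholder Azure OpenAI credentials
os.environ["AZURE_API_KEY"] = "azure key"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "azure api version"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# this PR replaces the deprecated `azure=True` flag with an explicit provider
response = completion('gpt-3.5-turbo', messages, custom_llm_provider="azure")
```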

### OpenAI Text Completion Models
51 docs/my-website/docs/debugging/hosted_debugging.md Normal file
@@ -0,0 +1,51 @@
import Image from '@theme/IdealImage';

# Debugging Dashboard
LiteLLM offers a free hosted debugger UI for your api calls (https://admin.litellm.ai/). Useful if you're testing your LiteLLM server and need to see if the API calls were made successfully.

**Needs litellm>=0.1.438**

You can enable this by setting `litellm.debugger=True`.

<Image img={require('../../img/dashboard.png')} alt="Dashboard" />

See our live dashboard 👉 [admin.litellm.ai](https://admin.litellm.ai/)

## Setup

By default, your dashboard is viewable at `admin.litellm.ai/<your_email>`.

```python
import litellm, os

## Set your email
os.environ["LITELLM_EMAIL"] = "your_user_email"

## Set debugger to true
litellm.debugger = True
```

## Example Usage

```python
import litellm
from litellm import embedding, completion
import os

## Set ENV variable
os.environ["LITELLM_EMAIL"] = "your_email"

## Set debugger to true
litellm.debugger = True

user_message = "Hello, how are you?"
messages = [{ "content": user_message,"role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

# bad request call
response = completion(model="chatgpt-test", messages=[{"role": "user", "content": "Hi 👋 - i'm a bad request"}])
```
63 docs/my-website/docs/debugging/local_debugging.md Normal file
@@ -0,0 +1,63 @@
# Local Debugging
There are two ways to do local debugging: set `litellm.set_verbose=True`, or pass in a custom function with `completion(...logger_fn=<your_local_function>)`.

## Set Verbose

This is good for getting print statements for everything litellm is doing.
```python
import litellm, os
from litellm import completion

litellm.set_verbose=True # 👈 this is the 1-line change you need to make

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion("command-nightly", messages)
```

## Logger Function
But sometimes all you care about is seeing exactly what's getting sent to your api call and what's being returned - e.g. if the api call is failing, why is that happening? what are the exact params being set?

In that case, LiteLLM allows you to pass in a custom logging function to see / modify the model call Input/Outputs.

**Note**: Your function is expected to accept a dict object.

Your custom function:

```python
def my_custom_logging_fn(model_call_dict):
    print(f"model call details: {model_call_dict}")
```

### Complete Example
```python
import litellm, os
from litellm import completion

def my_custom_logging_fn(model_call_dict):
    print(f"model call details: {model_call_dict}")

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, logger_fn=my_custom_logging_fn)

# cohere call
response = completion("command-nightly", messages, logger_fn=my_custom_logging_fn)
```
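If you only want a few fields from the call, a small filter also works. This is a sketch: the exact keys in `model_call_dict` are not documented here, though `input`, `api_key`, and `additional_args` appear in this PR's `Logging.pre_call` code.

```python
def my_selective_logging_fn(model_call_dict):
    # assumed keys; use .get() so a missing key doesn't raise
    call_input = model_call_dict.get("input")
    extra = model_call_dict.get("additional_args")
    print(f"input={call_input}, additional_args={extra}")
```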
## Still Seeing Issues?

Text us @ +17708783106 or join the [Discord](https://discord.com/invite/wuPM9dRgDw).

We promise to help you in `lite`ning speed ❤️
@@ -1,5 +1,13 @@
# Embedding Models

## OpenAI Embedding Models

| Model Name | Function Call | Required OS Variables |
|----------------------|---------------------------------------------|--------------------------------------|
| text-embedding-ada-002 | `embedding('text-embedding-ada-002', input)` | `os.environ['OPENAI_API_KEY']` |
| text-embedding-ada-002 | `embedding('text-embedding-ada-002', input)` | `os.environ['OPENAI_API_KEY']` |

## Azure OpenAI Embedding Models

| Model Name | Function Call | Required OS Variables |
|----------------------|---------------------------------------------|--------------------------------------|
| text-embedding-ada-002 | `embedding('embedding-model-deployment', input=input, custom_llm_provider="azure")` | `os.environ['AZURE_API_KEY']`,`os.environ['AZURE_API_BASE']`,`os.environ['AZURE_API_VERSION']` |
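A sketch of the Azure embedding call from the table above (the deployment name and credentials are placeholders):

```python
import os
from litellm import embedding

# placeholder Azure OpenAI credentials
os.environ["AZURE_API_KEY"] = "azure key"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "azure api version"

# 'embedding-model-deployment' stands in for your Azure deployment name
response = embedding('embedding-model-deployment', input=["good morning"], custom_llm_provider="azure")
```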
@@ -1,3 +1,7 @@
---
displayed_sidebar: tutorialSidebar
---

# litellm
[](https://pypi.org/project/litellm/)
[](https://pypi.org/project/litellm/0.1.1/)
@@ -1,47 +0,0 @@
---
sidebar_position: 1
---

# Tutorial Intro

Let's discover **Docusaurus in less than 5 minutes**.

## Getting Started

Get started by **creating a new site**.

Or **try Docusaurus immediately** with **[docusaurus.new](https://docusaurus.new)**.

### What you'll need

- [Node.js](https://nodejs.org/en/download/) version 16.14 or above:
  - When installing Node.js, you are recommended to check all checkboxes related to dependencies.

## Generate a new site

Generate a new Docusaurus site using the **classic template**.

The classic template will automatically be added to your project after you run the command:

```bash
npm init docusaurus@latest my-website classic
```

You can type this command into Command Prompt, Powershell, Terminal, or any other integrated terminal of your code editor.

The command also installs all necessary dependencies you need to run Docusaurus.

## Start your site

Run the development server:

```bash
cd my-website
npm run start
```

The `cd` command changes the directory you're working with. In order to work with your newly created Docusaurus site, you'll need to navigate the terminal there.

The `npm run start` command builds your website locally and serves it through a development server, ready for you to view at http://localhost:3000/.

Open `docs/intro.md` (this page) and edit some lines: the site **reloads automatically** and displays your changes.
@@ -27,6 +27,19 @@ const config = {
    locales: ['en'],
  },

  plugins: [
    [
      '@docusaurus/plugin-ideal-image',
      {
        quality: 70,
        max: 1030, // max resized image's size.
        min: 640, // min resized image's size. if original is lower, use that size.
        steps: 2, // the max number of images generated between min and max (inclusive)
        disableInDev: false,
      },
    ],
  ],

  presets: [
    [
      'classic',
@@ -35,9 +48,7 @@ const config = {
        docs: {
          sidebarPath: require.resolve('./sidebars.js'),
        },
        blog: {
          showReadingTime: true,
        },
        blog: false, // Optional: disable the blog plugin
        theme: {
          customCss: require.resolve('./src/css/custom.css'),
        },
@@ -74,7 +85,7 @@ const config = {
        items: [
          {
            label: 'Tutorial',
            to: '/docs/intro',
            to: '/docs/index',
          },
        ],
      },
BIN docs/my-website/img/dashboard.png Normal file (binary not shown; size after: 534 KiB)
439 docs/my-website/package-lock.json generated
Generated lockfile diff (suppressed): it registers the new "@docusaurus/plugin-ideal-image": "^2.4.1" and "uuid": "^9.0.0" dependencies and their transitive packages (@docusaurus/lqip-loader, @docusaurus/responsive-loader, @endiliey/react-ideal-image, sharp, react-waypoint, prebuild-install, tar-fs, tar-stream, and related helpers), and bumps the resolved uuid from 8.3.2 to 9.0.0.
@@ -15,12 +15,14 @@
  },
  "dependencies": {
    "@docusaurus/core": "2.4.1",
    "@docusaurus/plugin-ideal-image": "^2.4.1",
    "@docusaurus/preset-classic": "2.4.1",
    "@mdx-js/react": "^1.6.22",
    "clsx": "^1.2.1",
    "prism-react-renderer": "^1.3.5",
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
    "react-dom": "^17.0.2",
    "uuid": "^9.0.0"
  },
  "devDependencies": {
    "@docusaurus/module-type-aliases": "2.4.1"
@@ -14,11 +14,10 @@
/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
const sidebars = {
  // // By default, Docusaurus generates a sidebar from the docs folder structure
  // tutorialSidebar: [{type: 'autogenerated', dirName: '.'}],

  // But you can create a sidebar manually
  tutorialSidebar: [
    'index',
    { type: "doc", id: "index" }, // NEW
    {
      type: 'category',
      label: 'Completion()',
@@ -30,6 +29,8 @@ const sidebars = {
      items: ['embedding/supported_embedding'],
    },
    'completion/supported',
    'debugging/local_debugging',
    'debugging/hosted_debugging',
    {
      type: 'category',
      label: 'Tutorials',
@@ -1,4 +1,8 @@
# *🚅 litellm*
---
displayed_sidebar: tutorialSidebar
---

# litellm
[](https://pypi.org/project/litellm/)
[](https://pypi.org/project/litellm/0.1.1/)
[](https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main)
8013 docs/my-website/yarn.lock Normal file
File diff suppressed because it is too large.
@@ -4,6 +4,7 @@ input_callback: List[str] = []
success_callback: List[str] = []
failure_callback: List[str] = []
set_verbose = False
debugger_email = None # for debugging dashboard. Learn more - https://docs.litellm.ai/docs/debugging/hosted_debugging
telemetry = True
max_tokens = 256 # OpenAI Defaults
retry = True
@@ -17,7 +18,6 @@ openrouter_key: Optional[str] = None
huggingface_key: Optional[str] = None
vertex_project: Optional[str] = None
vertex_location: Optional[str] = None
hugging_api_token: Optional[str] = None
togetherai_api_key: Optional[str] = None
caching = False
caching_with_models = False # if you want the caching key to be model + prompt
@@ -148,7 +148,15 @@ cohere_models = [
anthropic_models = ["claude-2", "claude-instant-1", "claude-instant-1.2"]

replicate_models = [
    "replicate/"
    "replicate/",
    "replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781",
    "a16z-infra/llama-2-13b-chat:2a7f981751ec7fdf87b5b91ad4db53683a98082e9ff7bfd12c8cd5ea85980a52",
    "joehoover/instructblip-vicuna13b:c4c54e3c8c97cd50c2d2fec9be3b6065563ccf7d43787fb99f84151b867178fe",
    "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5",
    "a16z-infra/llama-2-7b-chat:7b0bfc9aff140d5b75bacbed23e91fd3c34b01a1e958d32132de6e0a19796e2c",
    "replicate/vicuna-13b:6282abe6a492de4145d7bb601023762212f9ddbbe78278bd6771c8b3b2f2a13b",
    "daanelson/flan-t5-large:ce962b3f6792a57074a601d3979db5839697add2e4e02696b3ced4c022d4767f",
    "replit/replit-code-v1-3b:b84f4c074b807211cd75e3e8b1589b6399052125b4c27106e43d47189e8415ad"
] # placeholder, to make sure we accept any replicate model in our model_list

openrouter_models = [
@@ -185,6 +193,18 @@ huggingface_models = [

ai21_models = ["j2-ultra", "j2-mid", "j2-light"]

together_ai_models = [
    "togethercomputer/llama-2-70b-chat",
    "togethercomputer/Llama-2-7B-32K-Instruct",
    "togethercomputer/llama-2-7b"
]

baseten_models = [
    "qvv0xeq", # FALCON 7B
    "q841o8w", # WizardLM
    "31dxrj3" # Mosaic ML
]

model_list = (
    open_ai_chat_completion_models
    + open_ai_text_completion_models
@@ -196,10 +216,13 @@ model_list = (
    + vertex_chat_models
    + vertex_text_models
    + ai21_models
    + together_ai_models
    + baseten_models
)

provider_list = [
    "openai",
    "azure",
    "cohere",
    "anthropic",
    "replicate",
@@ -208,7 +231,23 @@
    "openrouter",
    "vertex_ai",
    "ai21",
    "baseten"
]

models_by_provider = {
    "openai": open_ai_chat_completion_models
    + open_ai_text_completion_models,
    "cohere": cohere_models,
    "anthropic": anthropic_models,
    "replicate": replicate_models,
    "huggingface": huggingface_models,
    "together_ai": together_ai_models,
    "baseten": baseten_models,
    "openrouter": openrouter_models,
    "vertex_ai": vertex_chat_models + vertex_text_models,
    "ai21": ai21_models,
}
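A quick sketch of what the new `models_by_provider` map enables (assuming the dict is exposed at the top level of the `litellm` module, as this hunk suggests):

```python
import litellm

# look up every model this release associates with a given provider
for provider in ("cohere", "together_ai", "baseten"):
    print(provider, "->", litellm.models_by_provider[provider])
```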

####### EMBEDDING MODELS ###################
open_ai_embedding_models = ["text-embedding-ada-002"]

@@ -223,7 +262,8 @@ from .utils import (
    cost_per_token,
    completion_cost,
    get_litellm_params,
    Logging
    Logging,
    acreate
)
from .main import * # type: ignore
from .integrations import *
Binary files not shown (5 files).
92 litellm/integrations/litedebugger.py Normal file
@@ -0,0 +1,92 @@
import requests, traceback, json, os

class LiteDebugger:
    user_email = None
    dashboard_url = None

    def __init__(self, email=None):
        self.api_url = "https://api.litellm.ai/debugger"
        self.validate_environment(email)
        pass

    def validate_environment(self, email):
        try:
            self.user_email = os.getenv("LITELLM_EMAIL") or email
            self.dashboard_url = 'https://admin.litellm.ai/' + self.user_email
            print(f"Here's your free Dashboard 👉 {self.dashboard_url}")
            if self.user_email == None:
                raise Exception("[Non-Blocking Error] LiteLLMDebugger: Missing LITELLM_EMAIL. Set it in your environment. Eg.: os.environ['LITELLM_EMAIL']= <your_email>")
        except Exception as e:
            raise ValueError("[Non-Blocking Error] LiteLLMDebugger: Missing LITELLM_EMAIL. Set it in your environment. Eg.: os.environ['LITELLM_EMAIL']= <your_email>")

    def input_log_event(self, model, messages, end_user, litellm_call_id, print_verbose):
        try:
            print_verbose(
                f"LiteLLMDebugger: Logging - Enters input logging function for model {model}"
            )
            litellm_data_obj = {
                "model": model,
                "messages": messages,
                "end_user": end_user,
                "status": "initiated",
                "litellm_call_id": litellm_call_id,
                "user_email": self.user_email
            }
            response = requests.post(url=self.api_url, headers={"content-type": "application/json"}, data=json.dumps(litellm_data_obj))
            print_verbose(f"LiteDebugger: api response - {response.text}")
        except:
            print_verbose(f"[Non-Blocking Error] LiteDebugger: Logging Error - {traceback.format_exc()}")
            pass

    def log_event(self, model,
        messages,
        end_user,
        response_obj,
        start_time,
        end_time,
        litellm_call_id,
        print_verbose,):
        try:
            print_verbose(
                f"LiteLLMDebugger: Logging - Enters input logging function for model {model}"
            )
            total_cost = 0 # [TODO] implement cost tracking
            response_time = (end_time - start_time).total_seconds()
            if "choices" in response_obj:
                litellm_data_obj = {
                    "response_time": response_time,
                    "model": response_obj["model"],
                    "total_cost": total_cost,
                    "messages": messages,
                    "response": response_obj["choices"][0]["message"]["content"],
                    "end_user": end_user,
                    "litellm_call_id": litellm_call_id,
                    "status": "success",
                    "user_email": self.user_email
                }
                print_verbose(
                    f"LiteDebugger: Logging - final data object: {litellm_data_obj}"
                )
                response = requests.post(url=self.api_url, headers={"content-type": "application/json"}, data=json.dumps(litellm_data_obj))
            elif "error" in response_obj:
                if "Unable to map your input to a model." in response_obj["error"]:
                    total_cost = 0
                    litellm_data_obj = {
                        "response_time": response_time,
                        "model": response_obj["model"],
                        "total_cost": total_cost,
                        "messages": messages,
                        "error": response_obj["error"],
                        "end_user": end_user,
                        "litellm_call_id": litellm_call_id,
                        "status": "failure",
                        "user_email": self.user_email
                    }
                    print_verbose(
                        f"LiteDebugger: Logging - final data object: {litellm_data_obj}"
                    )
                    response = requests.post(url=self.api_url, headers={"content-type": "application/json"}, data=json.dumps(litellm_data_obj))
            print_verbose(f"LiteDebugger: api response - {response.text}")
        except:
            print_verbose(f"[Non-Blocking Error] LiteDebugger: Logging Error - {traceback.format_exc()}")
            pass
@@ -162,8 +162,8 @@
                .execute()
            )
            print(f"data: {data}")
            pass
        except:
            print_verbose(f"Supabase Logging Error - {traceback.format_exc()}")
            pass

    def log_event(
@@ -91,6 +91,7 @@ def completion(
    top_k=40,
    request_timeout=0, # unused var for old version of OpenAI API
) -> ModelResponse:
    args = locals()
    try:
        model_response = ModelResponse()
        if azure: # this flag is deprecated, remove once notebooks are also updated.
@@ -100,7 +101,6 @@ def completion(
            model = model.split("/", 1)[1]
        if "replicate" == custom_llm_provider and "/" not in model: # handle the "replicate/llama2..." edge-case
            model = custom_llm_provider + "/" + model
        args = locals()
        # check if user passed in any of the OpenAI optional params
        optional_params = get_optional_params(
            functions=functions,
@@ -153,7 +153,7 @@ def completion(
            # set key
            openai.api_key = api_key
            ## LOGGING
            logging.pre_call(input=messages, api_key=openai.api_key, additional_args={"headers": litellm.headers, "api_version": openai.api_version, "api_base": openai.api_base})
            logging.pre_call(input=messages, api_key=openai.api_key, additional_args={"litellm.headers": litellm.headers, "api_version": openai.api_version, "api_base": openai.api_base})
            ## COMPLETION CALL
            if litellm.headers:
                response = openai.ChatCompletion.create(
@@ -164,8 +164,9 @@ def completion(
                )
            else:
                response = openai.ChatCompletion.create(
                    model=model, messages=messages, **optional_params
                    engine=model, messages=messages, **optional_params
                )

            ## LOGGING
            logging.post_call(input=messages, api_key=openai.api_key, original_response=response, additional_args={"headers": litellm.headers, "api_version": openai.api_version, "api_base": openai.api_base})
        elif (
@@ -186,7 +187,7 @@ def completion(
            # set API KEY
            if not api_key and litellm.openai_key:
                api_key = litellm.openai_key
            elif not api_key and get_secret("AZURE_API_KEY"):
            elif not api_key and get_secret("OPENAI_API_KEY"):
                api_key = get_secret("OPENAI_API_KEY")

            openai.api_key = api_key
@@ -218,7 +219,7 @@ def completion(
            # set API KEY
            if not api_key and litellm.openai_key:
                api_key = litellm.openai_key
            elif not api_key and get_secret("AZURE_API_KEY"):
            elif not api_key and get_secret("OPENAI_API_KEY"):
                api_key = get_secret("OPENAI_API_KEY")

            openai.api_key = api_key
@@ -637,7 +638,6 @@ def completion(
            model_response["model"] = model
            response = model_response
        else:
            args = locals()
            raise ValueError(
                f"Unable to map your input to a model. Check your input - {args}"
            )
@@ -707,9 +707,10 @@ def embedding(model, input=[], azure=False, force_timeout=60, litellm_call_id=No

        return response
    except Exception as e:
        ## LOGGING
        logging.post_call(input=input, api_key=openai.api_key, original_response=e)
        ## Map to OpenAI Exception
        raise exception_type(model=model, original_exception=e, custom_llm_provider="azure" if azure==True else None)
        raise e

####### HELPER FUNCTIONS ################
@@ -9,7 +9,7 @@ import asyncio
sys.path.insert(
    0, os.path.abspath("../..")
) # Adds the parent directory to the system path
from litellm import acompletion
from litellm import acompletion, acreate


async def test_get_response():
@@ -24,3 +24,16 @@ async def test_get_response():

response = asyncio.run(test_get_response())
print(response)

# async def test_get_response():
#     user_message = "Hello, how are you?"
#     messages = [{"content": user_message, "role": "user"}]
#     try:
#         response = await acreate(model="gpt-3.5-turbo", messages=messages)
#     except Exception as e:
#         pytest.fail(f"error occurred: {e}")
#     return response

# response = asyncio.run(test_get_response())
# print(response)
@@ -254,8 +254,7 @@ def test_completion_openai_with_functions():
def test_completion_azure():
    try:
        response = completion(
            model="gpt-3.5-turbo",
            deployment_id="chatgpt-test",
            model="chatgpt-test",
            messages=messages,
            custom_llm_provider="azure",
        )
22 litellm/tests/test_litedebugger_integration.py Normal file
@@ -0,0 +1,22 @@
# #### What this tests ####
# # This tests if logging to the litedebugger integration actually works
# # pytest mistakes intentional bad calls as failed tests -> [TODO] fix this
# import sys, os
# import traceback
# import pytest

# sys.path.insert(0, os.path.abspath('../..')) # Adds the parent directory to the system path
# import litellm
# from litellm import embedding, completion

# litellm.debugger = True

# user_message = "Hello, how are you?"
# messages = [{ "content": user_message,"role": "user"}]

# #openai call
# response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

# #bad request call
# response = completion(model="chatgpt-test", messages=[{"role": "user", "content": "Hi 👋 - i'm a bad request"}])
@@ -22,6 +22,32 @@ def logger_fn(model_call_object: dict):
user_message = "Hello, how are you?"
messages = [{"content": user_message, "role": "user"}]

# test on openai completion call
try:
    response = completion(
        model="gpt-3.5-turbo", messages=messages, stream=True, logger_fn=logger_fn
    )
    for chunk in response:
        print(chunk["choices"][0]["delta"])
        score += 1
except:
    print(f"error occurred: {traceback.format_exc()}")
    pass


# test on azure completion call
try:
    response = completion(
        model="azure/chatgpt-test", messages=messages, stream=True, logger_fn=logger_fn
    )
    for chunk in response:
        print(chunk["choices"][0]["delta"])
        score += 1
except:
    print(f"error occurred: {traceback.format_exc()}")
    pass


# test on anthropic completion call
try:
    response = completion(
@@ -35,19 +61,19 @@
    pass


# test on anthropic completion call
try:
    response = completion(
        model="meta-llama/Llama-2-7b-chat-hf",
        messages=messages,
        custom_llm_provider="huggingface",
        custom_api_base="https://s7c7gytn18vnu4tw.us-east-1.aws.endpoints.huggingface.cloud",
        stream=True,
        logger_fn=logger_fn,
    )
    for chunk in response:
        print(chunk["choices"][0]["delta"])
        score += 1
except:
    print(f"error occurred: {traceback.format_exc()}")
    pass
# # test on huggingface completion call
# try:
#     response = completion(
#         model="meta-llama/Llama-2-7b-chat-hf",
#         messages=messages,
#         custom_llm_provider="huggingface",
#         custom_api_base="https://s7c7gytn18vnu4tw.us-east-1.aws.endpoints.huggingface.cloud",
#         stream=True,
#         logger_fn=logger_fn,
#     )
#     for chunk in response:
#         print(chunk["choices"][0]["delta"])
#         score += 1
# except:
#     print(f"error occurred: {traceback.format_exc()}")
#     pass
@@ -1,5 +1,5 @@
# #### What this tests ####
# # This tests if logging to the helicone integration actually works
# # This tests if logging to the supabase integration actually works
# # pytest mistakes intentional bad calls as failed tests -> [TODO] fix this
# import sys, os
# import traceback
@@ -13,7 +13,7 @@
# litellm.success_callback = ["supabase"]
# litellm.failure_callback = ["supabase"]

# litellm.modify_integration("supabase",{"table_name": "test_table"})
# # litellm.modify_integration("supabase",{"table_name": "test_table"})

# litellm.set_verbose = True
118 litellm/utils.py
@@ -12,6 +12,7 @@ from .integrations.helicone import HeliconeLogger
from .integrations.aispend import AISpendLogger
from .integrations.berrispend import BerriSpendLogger
from .integrations.supabase import Supabase
from .integrations.litedebugger import LiteDebugger
from openai.error import OpenAIError as OriginalError
from openai.openai_object import OpenAIObject
from .exceptions import (
@@ -35,6 +36,7 @@ heliconeLogger = None
aispendLogger = None
berrispendLogger = None
supabaseClient = None
liteDebuggerClient = None
callback_list: Optional[List[str]] = []
user_logger_fn = None
additional_details: Optional[Dict[str, str]] = {}
@@ -136,6 +138,7 @@ def install_and_import(package: str):
####### LOGGING ###################
# Logging function -> log the exact model details + what's being sent | Non-Blocking
class Logging:
    global supabaseClient, liteDebuggerClient
    def __init__(self, model, messages, optional_params, litellm_params):
        self.model = model
        self.messages = messages
@@ -151,7 +154,7 @@ class Logging:

    def pre_call(self, input, api_key, additional_args={}):
        try:
            print(f"logging pre call for model: {self.model}")
            print_verbose(f"logging pre call for model: {self.model}")
            self.model_call_details["input"] = input
            self.model_call_details["api_key"] = api_key
            self.model_call_details["additional_args"] = additional_args
@@ -177,7 +180,7 @@ class Logging:
                    print_verbose("reaches supabase for logging!")
                    model = self.model
                    messages = self.messages
                    print(f"litellm._thread_context: {litellm._thread_context}")
                    print(f"supabaseClient: {supabaseClient}")
                    supabaseClient.input_log_event(
                        model=model,
                        messages=messages,
@@ -185,14 +188,34 @@ class Logging:
                        litellm_call_id=self.litellm_params["litellm_call_id"],
                        print_verbose=print_verbose,
                    )
                    pass
-           except:
-               pass
+               elif callback == "lite_debugger":
+                   print_verbose("reaches litedebugger for logging!")
+                   model = self.model
+                   messages = self.messages
+                   print_verbose(f"liteDebuggerClient: {liteDebuggerClient}")
+                   liteDebuggerClient.input_log_event(
+                       model=model,
+                       messages=messages,
+                       end_user=litellm._thread_context.user,
+                       litellm_call_id=self.litellm_params["litellm_call_id"],
+                       print_verbose=print_verbose,
+                   )
+           except Exception as e:
+               print_verbose(f"LiteLLM.LoggingError: [Non-Blocking] Exception occurred while input logging with integrations {traceback.format_exc()}")
+               print_verbose(
+                   f"LiteLLM.Logging: is sentry capture exception initialized {capture_exception}"
+               )
+               if capture_exception:  # log this error to sentry for debugging
+                   capture_exception(e)
        except:
            print_verbose(
                f"LiteLLM.LoggingError: [Non-Blocking] Exception occurred while logging {traceback.format_exc()}"
            )
            pass
            print_verbose(
                f"LiteLLM.Logging: is sentry capture exception initialized {capture_exception}"
            )
            if capture_exception:  # log this error to sentry for debugging
                capture_exception(e)

    def post_call(self, input, api_key, original_response, additional_args={}):
        # Do something here
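Note the layered `try/except` in this hunk: input logging is deliberately non-blocking, so a failure inside any single integration is reported via `print_verbose` (and Sentry's `capture_exception` when configured) but never propagated to the caller. The pattern, distilled into an illustrative sketch (not library code):

```python
def log_safely(log_fn, *args, **kwargs):
    # a logging failure must never break the user's completion call
    try:
        log_fn(*args, **kwargs)
    except Exception:
        pass  # swallow after (optionally) reporting
```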
@@ -220,9 +243,6 @@ class Logging:
             f"LiteLLM.LoggingError: [Non-Blocking] Exception occurred while logging {traceback.format_exc()}"
         )
         pass
-
-    # Add more methods as needed
-

 def exception_logging(
     additional_args={},
@@ -257,11 +277,16 @@ def exception_logging(
 ####### CLIENT ###################
 # make it easy to log if completion/embedding runs succeeded or failed + see what happened | Non-Blocking
 def client(original_function):
+    global liteDebuggerClient
     def function_setup(
         *args, **kwargs
     ):  # just run once to check if user wants to send their data anywhere - PostHog/Sentry/Slack/etc.
         try:
             global callback_list, add_breadcrumb, user_logger_fn
+            if litellm.debugger or os.getenv("LITELLM_EMAIL", None) != None:  # add to input, success and failure callbacks if user sets debugging to true
+                litellm.input_callback.append("lite_debugger")
+                litellm.success_callback.append("lite_debugger")
+                litellm.failure_callback.append("lite_debugger")
             if (
                 len(litellm.input_callback) > 0 or len(litellm.success_callback) > 0 or len(litellm.failure_callback) > 0
             ) and len(callback_list) == 0:
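In practice this means the hosted debugger can be switched on with one flag — a sketch based directly on the opt-in logic in the hunk above (`litellm.debugger`, or the `LITELLM_EMAIL` env var):

```python
import litellm
from litellm import completion

# either opt-in makes function_setup append "lite_debugger" to the
# input, success and failure callback lists automatically
litellm.debugger = True
# os.environ["LITELLM_EMAIL"] = "you@example.com"  # alternative opt-in

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey"}],
)
```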
@@ -387,6 +412,9 @@ def client(original_function):
                 args=(e, traceback_exception, start_time, end_time, args, kwargs),
             )  # don't interrupt execution of main thread
             my_thread.start()
+            if hasattr(e, "message"):
+                if liteDebuggerClient and liteDebuggerClient.dashboard_url != None:  # make it easy to get to the debugger logs if you've initialized it
+                    e.message += f"\n Check the log in your dashboard - {liteDebuggerClient.dashboard_url}"
             raise e

     return wrapper
@@ -626,7 +654,7 @@ def load_test_model(


 def set_callbacks(callback_list):
-    global sentry_sdk_instance, capture_exception, add_breadcrumb, posthog, slack_app, alerts_channel, heliconeLogger, aispendLogger, berrispendLogger, supabaseClient
+    global sentry_sdk_instance, capture_exception, add_breadcrumb, posthog, slack_app, alerts_channel, heliconeLogger, aispendLogger, berrispendLogger, supabaseClient, liteDebuggerClient
     try:
         for callback in callback_list:
             print(f"callback: {callback}")
@@ -688,12 +716,15 @@ def set_callbacks(callback_list):
             elif callback == "supabase":
                 print(f"instantiating supabase")
                 supabaseClient = Supabase()
+            elif callback == "lite_debugger":
+                print(f"instantiating lite_debugger")
+                liteDebuggerClient = LiteDebugger(email=litellm.email)
     except Exception as e:
         raise e


 def handle_failure(exception, traceback_exception, start_time, end_time, args, kwargs):
-    global sentry_sdk_instance, capture_exception, add_breadcrumb, posthog, slack_app, alerts_channel, aispendLogger, berrispendLogger
+    global sentry_sdk_instance, capture_exception, add_breadcrumb, posthog, slack_app, alerts_channel, aispendLogger, berrispendLogger, supabaseClient, liteDebuggerClient
     try:
         # print_verbose(f"handle_failure args: {args}")
         # print_verbose(f"handle_failure kwargs: {kwargs}")
@@ -794,6 +825,7 @@ def handle_failure(exception, traceback_exception, start_time, end_time, args, kwargs):
                 )
             elif callback == "supabase":
                 print_verbose("reaches supabase for logging!")
+                print_verbose(f"supabaseClient: {supabaseClient}")
                 model = args[0] if len(args) > 0 else kwargs["model"]
                 messages = args[1] if len(args) > 1 else kwargs["messages"]
                 result = {
@@ -817,6 +849,32 @@ def handle_failure(exception, traceback_exception, start_time, end_time, args, kwargs):
                     litellm_call_id=kwargs["litellm_call_id"],
                     print_verbose=print_verbose,
                 )
+            elif callback == "lite_debugger":
+                print_verbose("reaches lite_debugger for logging!")
+                print_verbose(f"liteDebuggerClient: {liteDebuggerClient}")
+                model = args[0] if len(args) > 0 else kwargs["model"]
+                messages = args[1] if len(args) > 1 else kwargs["messages"]
+                result = {
+                    "model": model,
+                    "created": time.time(),
+                    "error": traceback_exception,
+                    "usage": {
+                        "prompt_tokens": prompt_token_calculator(
+                            model, messages=messages
+                        ),
+                        "completion_tokens": 0,
+                    },
+                }
+                liteDebuggerClient.log_event(
+                    model=model,
+                    messages=messages,
+                    end_user=litellm._thread_context.user,
+                    response_obj=result,
+                    start_time=start_time,
+                    end_time=end_time,
+                    litellm_call_id=kwargs["litellm_call_id"],
+                    print_verbose=print_verbose,
+                )
         except:
             print_verbose(
                 f"Error Occurred while logging failure: {traceback.format_exc()}"
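The synthesized failure record mirrors an OpenAI-style response: the exception is stored under `error`, only prompt tokens are counted, and `completion_tokens` is pinned to 0 since nothing was generated. For reference, with illustrative values:

```python
# shape of the failure payload passed to liteDebuggerClient.log_event
result = {
    "model": "gpt-3.5-turbo",
    "created": 1692000000.0,        # time.time() at logging
    "error": "<traceback string>",  # traceback_exception from the failed call
    "usage": {
        "prompt_tokens": 12,        # prompt_token_calculator(model, messages=messages)
        "completion_tokens": 0,     # no completion on failure
    },
}
```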
@@ -837,7 +895,7 @@ def handle_failure(exception, traceback_exception, start_time, end_time, args, kwargs):


 def handle_success(args, kwargs, result, start_time, end_time):
-    global heliconeLogger, aispendLogger
+    global heliconeLogger, aispendLogger, supabaseClient, liteDebuggerClient
     try:
         success_handler = additional_details.pop("success_handler", None)
         failure_handler = additional_details.pop("failure_handler", None)
@@ -904,7 +962,7 @@ def handle_success(args, kwargs, result, start_time, end_time):
                 print_verbose("reaches supabase for logging!")
                 model = args[0] if len(args) > 0 else kwargs["model"]
                 messages = args[1] if len(args) > 1 else kwargs["messages"]
-                print(f"litellm._thread_context: {litellm._thread_context}")
+                print(f"supabaseClient: {supabaseClient}")
                 supabaseClient.log_event(
                     model=model,
                     messages=messages,
@@ -915,6 +973,21 @@ def handle_success(args, kwargs, result, start_time, end_time):
                     litellm_call_id=kwargs["litellm_call_id"],
                     print_verbose=print_verbose,
                 )
+            elif callback == "lite_debugger":
+                print_verbose("reaches lite_debugger for logging!")
+                model = args[0] if len(args) > 0 else kwargs["model"]
+                messages = args[1] if len(args) > 1 else kwargs["messages"]
+                print_verbose(f"liteDebuggerClient: {liteDebuggerClient}")
+                liteDebuggerClient.log_event(
+                    model=model,
+                    messages=messages,
+                    end_user=litellm._thread_context.user,
+                    response_obj=result,
+                    start_time=start_time,
+                    end_time=end_time,
+                    litellm_call_id=kwargs["litellm_call_id"],
+                    print_verbose=print_verbose,
+                )
         except Exception as e:
             ## LOGGING
             exception_logging(logger_fn=user_logger_fn, exception=e)
@@ -935,6 +1008,9 @@ def handle_success(args, kwargs, result, start_time, end_time):
         pass


+def acreate(*args, **kwargs):  ## Thin client to handle the acreate langchain call
+    return litellm.acompletion(*args, **kwargs)
+
 def prompt_token_calculator(model, messages):
     # use tiktoken or anthropic's tokenizer depending on the model
     text = " ".join(message["content"] for message in messages)
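`acreate` is a thin shim so langchain's async path lands on `litellm.acompletion`. Usage follows the standard asyncio pattern — a sketch, assuming `acreate` is re-exported at the package top level (otherwise pull it from `litellm.utils`):

```python
import asyncio
from litellm import acreate  # thin wrapper over litellm.acompletion; import path assumed

async def main():
    response = await acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello from async"}],
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())
```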
@@ -949,6 +1025,16 @@ def prompt_token_calculator(model, messages):
         num_tokens = len(encoding.encode(text))
     return num_tokens

+def valid_model(model):
+    try:
+        # for a given model name, check if the user has the right permissions to access the model
+        if model in litellm.open_ai_chat_completion_models or model in litellm.open_ai_text_completion_models:
+            openai.Model.retrieve(model)
+        else:
+            messages = [{"role": "user", "content": "Hello World"}]
+            litellm.completion(model=model, messages=messages)
+    except:
+        raise InvalidRequestError(message="", model=model, llm_provider="")

 # integration helper function
 def modify_integration(integration_name, integration_params):
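`valid_model` doubles as a cheap preflight check; note that for non-OpenAI models it spends one real completion call ("Hello World") to probe access. A hedged usage sketch, with import paths assumed from this diff:

```python
from litellm.utils import valid_model               # location assumed
from litellm.exceptions import InvalidRequestError  # import path assumed

try:
    valid_model("gpt-3.5-turbo")  # OpenAI models are probed via openai.Model.retrieve
    print("model is accessible with the current keys")
except InvalidRequestError:
    print("no access to this model")
```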
@@ -958,8 +1044,9 @@ def modify_integration(integration_name, integration_params):
             Supabase.supabase_table_name = integration_params["table_name"]


 ####### EXCEPTION MAPPING ################
 def exception_type(model, original_exception, custom_llm_provider):
-    global user_logger_fn
+    global user_logger_fn, liteDebuggerClient
     exception_mapping_worked = False
     try:
         if isinstance(original_exception, OriginalError):
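Because `exception_type` re-raises provider failures as OpenAI-style errors (note the `OpenAIError as OriginalError` import earlier in this file), callers can write one handler that covers every provider — a sketch:

```python
from openai.error import OpenAIError
from litellm import completion

try:
    completion(
        model="claude-instant-1",
        messages=[{"role": "user", "content": "Hi"}],
    )
except OpenAIError as e:
    # uniform handling regardless of the underlying provider
    print(f"mapped provider error: {e}")
```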
@@ -1099,6 +1186,7 @@ def exception_type(model, original_exception, custom_llm_provider):
         raise original_exception


 ####### CRASH REPORTING ################
 def safe_crash_reporting(model=None, exception=None, custom_llm_provider=None):
     data = {
         "model": model,
|
|||
return response
|
||||
|
||||
|
||||
########## Together AI streaming #############################
|
||||
########## Together AI streaming ############################# [TODO] move together ai to it's own llm class
|
||||
async def together_ai_completion_streaming(json_data, headers):
|
||||
session = aiohttp.ClientSession()
|
||||
url = "https://api.together.xyz/inference"
|
||||
|
|
|
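For reference, the aiohttp pattern `together_ai_completion_streaming` relies on looks roughly like this — a simplified sketch (the function name and chunk handling are illustrative; the real coroutine also parses Together AI's event payloads and manages session cleanup itself):

```python
import aiohttp

async def stream_inference(json_data, headers):
    # POST to the Together AI inference endpoint and yield raw response chunks
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "https://api.together.xyz/inference", json=json_data, headers=headers
        ) as resp:
            async for chunk in resp.content.iter_any():
                yield chunk.decode("utf-8", errors="ignore")
```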
pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "litellm"
-version = "0.1.431"
+version = "0.1.448"
 description = "Library to easily interface with LLM API providers"
 authors = ["BerriAI"]
 license = "MIT License"