Mirror of https://github.com/BerriAI/litellm.git

Commit 6da73d06d3 (parent bea862827e): add hosted debugger back to docs

3 changed files with 48 additions and 87 deletions
import Image from '@theme/IdealImage';
import QueryParamReader from '../../src/components/queryParamReader.js'

# [Beta] Monitor Logs in Production

:::note

This is in beta. Expect frequent updates, as we improve based on your feedback.

:::

LiteLLM provides an integration to let you monitor logs in production.

👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/

### 1. Get your LiteLLM Token

Go to [admin.litellm.ai](https://admin.litellm.ai/) and copy the code snippet with your unique token.

<Image img={require('../../img/hosted_debugger_usage_page.png')} alt="Usage" />

### 2. Set up your environment

**Add it to your .env**

```python
import os

os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
```

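If you keep the token in an actual `.env` file, here's a minimal sketch of loading it at startup - assuming the `python-dotenv` package, which is not part of litellm:

```python
# a sketch, assuming python-dotenv is installed (`pip install python-dotenv`)
import os
from dotenv import load_dotenv

load_dotenv()  # reads LITELLM_TOKEN from a .env file in the working directory
assert "LITELLM_TOKEN" in os.environ  # verify the token was picked up
```
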
**Turn on LiteLLM Client**

```python
import litellm

litellm.use_client = True
```

### 3. Make a normal `completion()` call

```python
import litellm
from litellm import completion
import os

# set env variables
os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
os.environ["OPENAI_API_KEY"] = "openai key"

litellm.use_client = True # enable logging dashboard
messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
```

Your `completion()` call prints a link to your session dashboard (https://admin.litellm.ai/<your_unique_token>)

In the above case it would be: [`admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb`](https://admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb)

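Since the dashboard URL is just `admin.litellm.ai/<your_unique_token>`, you can also reconstruct it yourself - a small sketch:

```python
import os

# the dashboard URL format is admin.litellm.ai/<your_unique_token> (see above)
dashboard_url = f"https://admin.litellm.ai/{os.environ['LITELLM_TOKEN']}"
print(dashboard_url)
```
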
Click on your personal dashboard link. Here's how you can find it 👇

🎉 Congratulations! You've successfully debugged your first log!

:::

## Deploy your first LLM

LiteLLM also lets you add a new model to your project - without touching code **or** using a proxy server.

### 1. Add new model

On the same debugger dashboard we just made, go to the 'Add New LLM' section:

* Select Provider
* Select your LLM
* Add your LLM Key

<Image img={require('../../img/add_model.png')} alt="Dashboard" />

This works with any model on Replicate, Together_ai, Baseten, Anthropic, Cohere, AI21, OpenAI, Azure, VertexAI (Google Palm), or OpenRouter.

After adding your new LLM, LiteLLM securely stores your API key and model configs.

[👋 Tell us if you need to self-host **or** integrate with your key manager](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)

### 2. Test new model using `completion()`

Once you've added your models, LiteLLM `completion()` calls will just work for those models + providers.

```python
import litellm
from litellm import completion

litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token

# no need to set a provider key - the LiteLLM Client reads your stored key
messages = [{"role": "user", "content": "Hi 👋 - i'm openai"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
```

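The same pattern should apply to any provider you configured above - a hedged sketch (the Anthropic model name below is only an example; substitute whichever LLM you actually added):

```python
import litellm
from litellm import completion

litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token

# example only: assumes an Anthropic key was added via the dashboard
messages = [{"role": "user", "content": "Hello, how are you?"}]
response = completion(model="claude-instant-1", messages=messages)
```
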
### 3. [Bonus] Get available model list

Get a list of all models you've created through the Dashboard with one function call:

```python
import litellm

litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token

model_list = litellm.get_model_list()
print(model_list)
```

## Persisting your dashboard

If you want to use the same dashboard for your project, set `litellm.token` in code or in your .env as `LITELLM_TOKEN`.

All generated dashboards come with a token.

```python
import litellm

litellm.token = "80888ede-4881-4876-ab3f-765d47282e66"
```

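If you go the .env route instead, a sketch of wiring the variable up manually (assuming it is not picked up automatically):

```python
import os
import litellm

# equivalent to setting litellm.token directly, but reads LITELLM_TOKEN from the environment
litellm.token = os.environ.get("LITELLM_TOKEN")
```
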
## Additional Information
### LiteLLM Dashboard - Debug Logs

All your `completion()` and `embedding()` call logs are available on `admin.litellm.ai/<your-token>`

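For completeness, a minimal sketch of an `embedding()` call that would show up in the same logs (the model name is just an example):

```python
import litellm
from litellm import embedding

litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token

# example embedding call - logged on the dashboard alongside completion() calls
response = embedding(model="text-embedding-ada-002", input=["Hello, how are you?"])
```
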
#### Debug Logs for `completion()` and `embedding()`

<Image img={require('../../img/lite_logs.png')} alt="Dashboard" />

#### Viewing Errors on debug logs

<Image img={require('../../img/lite_logs2.png')} alt="Dashboard" />

### Opt-Out of using LiteLLM Client

If you want to opt out of using the LiteLLM client, you can set

```python
litellm.use_client = False
```

BIN: docs/my-website/img/hosted_debugger_usage_page.png (new file, binary file not shown; 329 KiB)

@@ -50,6 +50,7 @@ const sidebars = {
     "token_usage",
     "exception_mapping",
     'debugging/local_debugging',
+    'debugging/hosted_debugging',
     {
       type: 'category',
       label: 'Tutorials',