forked from phoenix/litellm-mirror
adding local debugging to docs
This commit is contained in:
parent
ecdc7abfd8
commit
ef28853346
8 changed files with 123 additions and 54 deletions
57
docs/my-website/docs/debugging/local_debugging.md
Normal file
@@ -0,0 +1,57 @@
# Local Debugging

There are two ways to do local debugging - setting `litellm.set_verbose=True`, or passing in a custom function with `completion(..., logger_fn=<your_local_function>)`.

## Set Verbose

This is good for getting print statements for everything litellm is doing.

```python
import os

import litellm
from litellm import completion

litellm.set_verbose = True  # 👈 this is the 1-line change you need to make

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion("command-nightly", messages)
```
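Since the snippet above hardcodes placeholder keys, a small pre-flight check can catch missing credentials before any call is made. This is a stdlib-only sketch; `check_keys` is a hypothetical helper, not part of litellm:

```python
import os

def check_keys(*names):
    # Return the expected environment variables that are missing or empty,
    # so you can fail fast before making any model calls.
    return [n for n in names if not os.environ.get(n)]

missing = check_keys("OPENAI_API_KEY", "COHERE_API_KEY")
if missing:
    print(f"set these before running: {missing}")
```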

## Logger Function

Sometimes all you care about is seeing exactly what's getting sent in your API call and what's being returned - e.g., if the API call is failing, why is that happening? What are the exact params being set?

In that case, LiteLLM allows you to pass in a custom logging function to see / modify the model call inputs and outputs.

**Note**: We expect your function to accept a dict object.

Your custom function:

```python
def my_custom_logging_fn(model_call_dict):
    print(f"model call details: {model_call_dict}")
```

### Complete Example

```python
import os

from litellm import completion

def my_custom_logging_fn(model_call_dict):
    print(f"model call details: {model_call_dict}")

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, logger_fn=my_custom_logging_fn)

# cohere call
response = completion("command-nightly", messages, logger_fn=my_custom_logging_fn)
```
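The same `logger_fn` hook can persist call details instead of printing them. A minimal sketch, assuming only that litellm passes your function a dict (as noted above); `file_logging_fn`, the filename, and the `default=str` fallback are illustrative choices, not litellm APIs:

```python
import json

def file_logging_fn(model_call_dict, path="model_calls.jsonl"):
    # Append each model call's details as one JSON object per line,
    # so failed runs can be inspected after the fact.
    with open(path, "a") as f:
        f.write(json.dumps(model_call_dict, default=str) + "\n")

# illustrative dict - the real keys depend on what litellm passes in
file_logging_fn(
    {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]},
    path="demo_calls.jsonl",
)
```

You would wire it up the same way as above: `completion(..., logger_fn=file_logging_fn)`.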

@@ -1,3 +1,7 @@
+---
+displayed_sidebar: tutorialSidebar
+---
+
 # litellm
 [](https://pypi.org/project/litellm/)
 [](https://pypi.org/project/litellm/0.1.1/)
@@ -1,47 +0,0 @@
---
sidebar_position: 1
---

# Tutorial Intro

Let's discover **Docusaurus in less than 5 minutes**.

## Getting Started

Get started by **creating a new site**.

Or **try Docusaurus immediately** with **[docusaurus.new](https://docusaurus.new)**.

### What you'll need

- [Node.js](https://nodejs.org/en/download/) version 16.14 or above:
  - When installing Node.js, you are recommended to check all checkboxes related to dependencies.

## Generate a new site

Generate a new Docusaurus site using the **classic template**.

The classic template will automatically be added to your project after you run the command:

```bash
npm init docusaurus@latest my-website classic
```

You can type this command into Command Prompt, Powershell, Terminal, or any other integrated terminal of your code editor.

The command also installs all necessary dependencies you need to run Docusaurus.

## Start your site

Run the development server:

```bash
cd my-website
npm run start
```

The `cd` command changes the directory you're working with. In order to work with your newly created Docusaurus site, you'll need to navigate the terminal there.

The `npm run start` command builds your website locally and serves it through a development server, ready for you to view at http://localhost:3000/.

Open `docs/intro.md` (this page) and edit some lines: the site **reloads automatically** and displays your changes.
32
docs/my-website/docs/tutorials/debugging_tutorial.md
Normal file
@@ -0,0 +1,32 @@
# Debugging UI Tutorial

LiteLLM offers a free hosted debugger UI for your API calls. This is useful if you're testing your LiteLLM server and need to see whether the API calls were made successfully.

You can enable it by setting `lite_debugger` as a callback.

## Example Usage

```python
import litellm
from litellm import embedding, completion

litellm.input_callback = ["lite_debugger"]
litellm.success_callback = ["lite_debugger"]
litellm.failure_callback = ["lite_debugger"]

litellm.set_verbose = True

user_message = "Hello, how are you?"
messages = [{"content": user_message, "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

# bad request call
response = completion(model="chatgpt-test", messages=[{"role": "user", "content": "Hi 👋 - i'm a bad request"}])
```
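The second call above is deliberately invalid, so it will raise. If you want the failure callback to fire without aborting the rest of your script, a generic wrapper works; this is a sketch where `safe_completion` is a hypothetical helper, demonstrated with a stand-in function rather than a live litellm call:

```python
def safe_completion(call, *args, **kwargs):
    # Run a completion-style callable, returning None instead of raising,
    # so one bad request doesn't abort a batch of test calls.
    try:
        return call(*args, **kwargs)
    except Exception as e:
        print(f"call failed: {e}")
        return None

# stand-in for completion(model="chatgpt-test", ...), which raises for unknown models
def fake_completion(model, messages):
    raise ValueError(f"unknown model: {model}")

result = safe_completion(fake_completion, "chatgpt-test", [{"role": "user", "content": "hi"}])
```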

## Requirements

## How to see the UI
21
docs/my-website/docs/tutorials/react_test.js
Normal file
@@ -0,0 +1,21 @@
import React from 'react';
import Layout from '@theme/Layout';

export default function Hello() {
  return (
    <Layout title="Hello" description="Hello React Page">
      <div
        style={{
          display: 'flex',
          justifyContent: 'center',
          alignItems: 'center',
          height: '50vh',
          fontSize: '20px',
        }}>
        <p>
          Edit <code>pages/helloReact.js</code> and save to reload.
        </p>
      </div>
    </Layout>
  );
}

@@ -35,9 +35,7 @@ const config = {
         docs: {
           sidebarPath: require.resolve('./sidebars.js'),
         },
-        blog: {
-          showReadingTime: true,
-        },
+        blog: false, // Optional: disable the blog plugin
         theme: {
           customCss: require.resolve('./src/css/custom.css'),
         },

@@ -14,11 +14,10 @@
 /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
 const sidebars = {
   // // By default, Docusaurus generates a sidebar from the docs folder structure
   // tutorialSidebar: [{type: 'autogenerated', dirName: '.'}],

   // But you can create a sidebar manually
   tutorialSidebar: [
-    'index',
+    { type: "doc", id: "index" }, // NEW
     {
       type: 'category',
       label: 'Completion()',
@@ -29,11 +28,12 @@ const sidebars = {
       label: 'Embedding()',
       items: ['embedding/supported_embedding'],
     },
+    'debugging/local_debugging',
     'completion/supported',
     {
       type: 'category',
       label: 'Tutorials',
-      items: ['tutorials/huggingface_tutorial', 'tutorials/TogetherAI_liteLLM'],
+      items: ['tutorials/huggingface_tutorial', 'tutorials/TogetherAI_liteLLM', 'tutorials/debugging_tutorial'],
     },
     'token_usage',
     'stream',
@@ -1,4 +1,8 @@
-# *🚅 litellm*
+---
+displayed_sidebar: tutorialSidebar
+---
+
+# litellm
 [](https://pypi.org/project/litellm/)
 [](https://pypi.org/project/litellm/0.1.1/)
 [](https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main)