mirror of
https://github.com/BerriAI/litellm.git
synced 2025-04-24 18:24:20 +00:00
(docs) proxy, show users how to use detailed_debug
This commit is contained in:
parent 6786e4f343
commit c5589e71e7
2 changed files with 24 additions and 0 deletions
@@ -82,6 +82,15 @@ Cli arguments, --host, --port, --num_workers
litellm --debug
```

#### --detailed_debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable detailed debug logs.
- **Usage:**
```shell
litellm --detailed_debug
```

#### --temperature
- **Default:** `None`
- **Type:** `float`
@@ -947,6 +947,13 @@ Run the proxy with `--debug` to easily view debug logs
litellm --model gpt-3.5-turbo --debug
```

### Detailed Debug Logs

Run the proxy with `--detailed_debug` to view detailed debug logs
```shell
litellm --model gpt-3.5-turbo --detailed_debug
```

When making requests, you should see the POST request sent by LiteLLM to the LLM in the terminal output
```shell
POST Request Sent from LiteLLM:
```
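The hunk above cuts off before showing the full log. As a rough sketch of what triggers that log line, here is the kind of OpenAI-style chat-completions payload a client would send to the proxy (the model name and message content are illustrative assumptions, not taken from the diff):

```python
import json

# Illustrative payload for an OpenAI-compatible chat completions request
# sent to the LiteLLM proxy; model and messages here are assumptions.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, world"}],
}

# Serialize as the request body; with --detailed_debug the proxy logs the
# POST request it forwards to the underlying LLM.
body = json.dumps(payload)
print(body)
```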
@@ -1281,6 +1288,14 @@ LiteLLM proxy adds **0.00325 seconds** latency as compared to using the Raw Open
```shell
litellm --debug
```
#### --detailed_debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable detailed debug logs.
- **Usage:**
```shell
litellm --detailed_debug
```

#### --temperature
- **Default:** `None`