(docs) proxy, show users how to use detailed_debug

This commit is contained in:
ishaan-jaff 2024-01-08 12:58:23 +05:30
parent 6786e4f343
commit c5589e71e7
2 changed files with 24 additions and 0 deletions


@@ -82,6 +82,15 @@ Cli arguments, --host, --port, --num_workers
litellm --debug
```
#### --detailed_debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable detailed debug logs, including the raw POST requests LiteLLM sends to the LLM API.
- **Usage:**
```shell
litellm --detailed_debug
```
#### --temperature
- **Default:** `None`
- **Type:** `float`


@@ -947,6 +947,13 @@ Run the proxy with `--debug` to easily view debug logs
litellm --model gpt-3.5-turbo --debug
```
### Detailed Debug Logs
Run the proxy with `--detailed_debug` to view detailed debug logs
```shell
litellm --model gpt-3.5-turbo --detailed_debug
```
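To trigger the detailed debug output, send a chat-completions request to the running proxy. A minimal sketch of such a request body (OpenAI chat-completions format; the model name here is just the one used in the example command above):

```python
import json

# Hypothetical request body a client would POST to the proxy's
# /chat/completions endpoint; with --detailed_debug enabled, the proxy
# logs the corresponding POST request it forwards to the LLM API.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}],
}

body = json.dumps(payload)
print(body)
```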
When making requests, you should see the POST request sent by LiteLLM to the LLM in the terminal output:
```shell
POST Request Sent from LiteLLM:
@@ -1281,6 +1288,14 @@ LiteLLM proxy adds **0.00325 seconds** latency as compared to using the Raw OpenAI
```shell
litellm --debug
```
#### --detailed_debug
- **Default:** `False`
- **Type:** `bool` (Flag)
- Enable detailed debug logs, including the raw POST requests LiteLLM sends to the LLM API.
- **Usage:**
```shell
litellm --detailed_debug
```
#### --temperature
- **Default:** `None`