(docs) using litellm proxy + Langfuse

This commit is contained in:
ishaan-jaff 2023-11-15 13:52:47 -08:00
parent b10e7b7973
commit a0ff9e7d7b


## Logging Proxy Input/Output - Langfuse
We will use the `--config` flag with the proxy for logging input/output to Langfuse.
- Setting `litellm.success_callback = ["langfuse"]` in the config will log all successful LLM calls to Langfuse (see the environment setup sketch below).
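
The `langfuse` callback reads your Langfuse project credentials from the environment, so export them before starting the proxy. A minimal sketch; the key values are placeholders:

```shell
# Langfuse project credentials, picked up by the langfuse callback
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
```
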
**Step 1**: Create a `config.yaml` file
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: gpt-4-team1
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
litellm_settings:
  success_callback: ["langfuse"]
```
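
The proxy also needs provider credentials for the models in `model_list`. A sketch using LiteLLM's standard environment variables; the values are placeholders:

```shell
export OPENAI_API_KEY="sk-..."  # used by the gpt-3.5-turbo entry
export AZURE_API_KEY="..."      # used by the azure/chatgpt-v-2 entry
```
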
**Step 2**: Start the proxy with your config, then make a test request

```shell
litellm --config config.yaml --debug
```
Test Request

```shell
litellm --test
```
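
You can also send a request to the proxy directly with `curl`. A sketch, assuming the proxy is listening on its default `0.0.0.0:8000` address:

```shell
curl --location 'http://0.0.0.0:8000/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}]
  }'
```

Successful calls should then appear as traces in your Langfuse dashboard.
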
## Proxy CLI Arguments