Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-27 19:54:13 +00:00

(docs) using litellm proxy + Langfuse

parent b10e7b7973
commit a0ff9e7d7b

1 changed file with 28 additions and 0 deletions
## Logging Proxy Input/Output - Langfuse

We will use the proxy's `--config` option to log proxy input/output to Langfuse.

- Setting `litellm.success_callback = ["langfuse"]` will log all successful LLM calls to Langfuse.

**Step 1**: Create a `config.yaml` file

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: gpt-4-team1
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
litellm_settings:
  success_callback: ["langfuse"]
```
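
The routing that `config.yaml` describes can be sketched as a plain mapping: each `model_name` is the alias callers send to the proxy, and the proxy forwards the call to the underlying provider model. This is a minimal illustration mirroring the YAML above, not litellm's internal data structure:

```python
# Sketch of the alias -> provider-model routing from config.yaml above.
# (Illustration only; litellm builds its own router from this config.)
model_list = [
    {"model_name": "gpt-3.5-turbo",
     "litellm_params": {"model": "gpt-3.5-turbo"}},
    {"model_name": "gpt-4-team1",
     "litellm_params": {"model": "azure/chatgpt-v-2",
                        "api_base": "https://openai-gpt-4-test-v-1.openai.azure.com/",
                        "api_version": "2023-05-15"}},
]

# Requests for "gpt-4-team1" are routed to the Azure deployment.
routing = {m["model_name"]: m["litellm_params"]["model"] for m in model_list}
print(routing)
```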

**Step 2**: Start the proxy with the config, then make a test request

```shell
litellm --config config.yaml --debug
```

Test request:

```shell
litellm --test
```
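
`litellm --test` sends an OpenAI-format chat completion request to the running proxy, which is what gets logged to Langfuse. A minimal sketch of an equivalent request body follows; the endpoint URL and port are assumptions (check your proxy's startup output for the actual address):

```python
import json

# Hypothetical local proxy endpoint; the port your proxy binds to may differ.
PROXY_URL = "http://0.0.0.0:8000/chat/completions"

# OpenAI-format body; "gpt-4-team1" is the alias defined under model_list
# in config.yaml, which the proxy routes to azure/chatgpt-v-2.
payload = {
    "model": "gpt-4-team1",
    "messages": [{"role": "user", "content": "which llm are you?"}],
}
body = json.dumps(payload)
print(body)
```

POSTing this body to the proxy (e.g. with `curl -d "$body" $PROXY_URL`) produces a call that, with `success_callback: ["langfuse"]` set, is logged to Langfuse on success.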
## Proxy CLI Arguments