diff --git a/docs/my-website/docs/proxy/logging.md b/docs/my-website/docs/proxy/logging.md
index 335a3d502..ec6f2ecbe 100644
--- a/docs/my-website/docs/proxy/logging.md
+++ b/docs/my-website/docs/proxy/logging.md
@@ -160,144 +160,71 @@ On Success
 Response: {'id': 'chatcmpl-8S8avKJ1aVBg941y5xzGMSKrYCMvN', 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': 'Good morning! How can I assist you today?', 'role': 'assistant'}}], 'created': 1701716913, 'model': 'gpt-3.5-turbo-0613', 'object': 'chat.completion', 'system_fingerprint': None, 'usage': {'completion_tokens': 10, 'prompt_tokens': 11, 'total_tokens': 21}}
 Proxy Metadata: {'user_api_key': None, 'headers': Headers({'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'authorization': 'Bearer sk-1234', 'content-length': '199', 'content-type': 'application/x-www-form-urlencoded'}), 'model_group': 'gpt-3.5-turbo', 'deployment': 'gpt-3.5-turbo-ModelID-gpt-3.5-turbo'}
 ```
-
+### Logging `model_info` set in config.yaml
+
+Here is how to log the `model_info` set in your proxy `config.yaml`. See [config.yaml](https://docs.litellm.ai/docs/proxy/configs) for information on setting `model_info`.
+
+```python
+class MyCustomHandler(CustomLogger):
+    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
+        print("On Async Success!")
+
+        # default to {} so the .get() below is safe if litellm_params is absent
+        litellm_params = kwargs.get("litellm_params", {})
+        model_info = litellm_params.get("model_info")
+        print(model_info)
+```
+
+**Expected Output**
+```json
+{'mode': 'embedding', 'input_cost_per_token': 0.002}
+```
+
+### Logging LLM Responses
 
 ## OpenTelemetry, ElasticSearch
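
For context, here is a minimal sketch of the `model_info` block the handler above would pick up, assuming a typical proxy `config.yaml`. The model names and the `os.environ/` key reference are placeholders; the linked configs page is the authoritative reference for the schema:

```yaml
model_list:
  - model_name: azure-embedding-model        # placeholder model group name
    litellm_params:
      model: azure/azure-embedding-model     # placeholder deployment
      api_key: os.environ/AZURE_API_KEY      # placeholder; resolved from the environment
    model_info:                              # surfaced to the handler via kwargs["litellm_params"]["model_info"]
      mode: embedding
      input_cost_per_token: 0.002
```

With this entry, the handler's `print(model_info)` produces the expected output shown in the diff.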
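And a sketch of wiring the handler into the proxy, assuming the custom-callback pattern used elsewhere in this logging doc: the handler instance lives in a module next to the config, and `litellm_settings.callbacks` points at it. The `custom_callbacks.py` filename and `proxy_handler_instance` attribute name are illustrative:

```python
# custom_callbacks.py -- illustrative filename
from litellm.integrations.custom_logger import CustomLogger

class MyCustomHandler(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # log model_info as shown in the section above
        model_info = kwargs.get("litellm_params", {}).get("model_info")
        print(model_info)

# the proxy imports this instance via litellm_settings.callbacks
proxy_handler_instance = MyCustomHandler()
```

with the matching entry in `config.yaml`:

```yaml
litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance
```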