Merge pull request #2659 from vivek-athina/athina-doc-updates-and-a-minor-fix

Athina docs updated with information about additional fields and a minor fix in the callback
This commit is contained in:
Krish Dholakia 2024-03-23 15:54:32 -07:00 committed by GitHub
commit 5bcf92f4f5
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
2 changed files with 30 additions and 3 deletions


@@ -41,6 +41,35 @@ response = completion(
)
```
## Additional information in metadata
You can send additional information to Athina via the `metadata` field in `completion`. This is useful for attaching request-level details you want to track, such as `customer_id`, `prompt_slug`, or `environment`.
```python
# openai call with additional metadata
response = completion(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": "Hi 👋 - i'm openai"}
],
metadata={
"environment": "staging",
"prompt_slug": "my_prompt_slug/v1"
}
)
```
Following are the allowed fields in metadata, their types, and their descriptions:
* `environment: Optional[str]` - Environment your app is running in (ex: production, staging, etc). This is useful for segmenting inference calls by environment.
* `prompt_slug: Optional[str]` - Identifier for the prompt used for inference. This is useful for segmenting inference calls by prompt.
* `customer_id: Optional[str]` - This is your customer ID. This is useful for segmenting inference calls by customer.
* `customer_user_id: Optional[str]` - This is the end user ID. This is useful for segmenting inference calls by the end user.
* `session_id: Optional[str]` - This is the session or conversation ID. It is used for grouping related inferences into a conversation or chain. [Read more](https://docs.athina.ai/logging/grouping_inferences)
* `external_reference_id: Optional[str]` - This is useful if you want to associate your own internal identifier with the inference logged to Athina.
* `context: Optional[Union[dict, str]]` - This is the context used as information for the prompt. For RAG applications, this is the "retrieved" data. You may log context as a string or as an object (dictionary).
* `expected_response: Optional[str]` - This is the reference response to compare against for evaluation purposes. This is useful for evaluations that compare the logged response with a reference answer.
* `user_query: Optional[str]` - This is the user's query. For conversational applications, this is the user's last message.
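The allowed-keys list above can also be enforced client-side before calling `completion`. As a minimal sketch (the `filter_athina_metadata` helper and `ALLOWED_METADATA_KEYS` name are illustrative, not part of litellm or Athina's SDK), unsupported keys can be dropped like so:

```python
# Sketch: filter a metadata dict down to the fields Athina accepts.
# ALLOWED_METADATA_KEYS mirrors the field list documented above; the
# helper itself is illustrative and not part of litellm or Athina.
ALLOWED_METADATA_KEYS = [
    "environment", "prompt_slug", "customer_id", "customer_user_id",
    "session_id", "external_reference_id", "context",
    "expected_response", "user_query",
]

def filter_athina_metadata(metadata: dict) -> dict:
    """Keep only keys Athina recognizes, dropping everything else."""
    return {k: v for k, v in metadata.items() if k in ALLOWED_METADATA_KEYS}

filtered = filter_athina_metadata({
    "environment": "staging",
    "prompt_slug": "my_prompt_slug/v1",
    "internal_debug_flag": True,  # not an Athina field, dropped here
})
print(filtered)  # only "environment" and "prompt_slug" remain
```

The filtered dict can then be passed as the `metadata` argument of `completion`, as in the example above.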
## Support & Talk with Athina Team
- [Schedule Demo 👋](https://cal.com/shiv-athina/30min)


@@ -10,7 +10,7 @@ class AthinaLogger:
            "Content-Type": "application/json"
        }
        self.athina_logging_url = "https://log.athina.ai/api/v1/log/inference"
-        self.additional_keys = ["environment", "prompt_slug", "customer_id", "customer_user_id", "session_id", "external_reference_id", "context", "expected_response"]
+        self.additional_keys = ["environment", "prompt_slug", "customer_id", "customer_user_id", "session_id", "external_reference_id", "context", "expected_response", "user_query"]

    def log_event(self, kwargs, response_obj, start_time, end_time, print_verbose):
        import requests
@@ -32,8 +32,6 @@ class AthinaLogger:
        if "messages" in kwargs:
            data["prompt"] = kwargs.get("messages", None)
-        if kwargs.get("messages") and len(kwargs.get("messages")) > 0:
-            data["user_query"] = kwargs.get("messages")[0].get("content", None)

        # Directly add tools or functions if present
        optional_params = kwargs.get("optional_params", {})